
1Z0-1084-20 Online Practice Questions and Answers

Question 4

Which one of the statements describes a service aggregator pattern?

A. It is implemented in each service separately and uses a streaming service

B. It involves implementing a separate service that makes multiple calls to other backend services

C. It uses a queue on both sides of the service communication

D. It involves sending events through a message broker


Correct Answer: B

This pattern isolates an operation that makes calls to multiple back-end microservices, centralizing its logic into a separate, specialized microservice.

Question 5

Your Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) administrator has created an OKE cluster with one node pool in a public subnet. You have been asked to provide a log file from one of the nodes for troubleshooting purposes.

Which step should you take to obtain the log file?

A. ssh into the node using public key.

B. ssh into the nodes using private key.

C. It is impossible since OKE is a managed Kubernetes service.

D. Use the username opc and password to log in.


Correct Answer: B

A Kubernetes cluster is a group of nodes. The nodes are the machines running applications. Each node can be a physical machine or a virtual machine. The node's capacity (its number of CPUs and amount of memory) is defined when the node is created. A cluster comprises one or more master nodes (for high availability, typically there will be a number of master nodes) and one or more worker nodes (sometimes known as minions).

Connecting to Worker Nodes Using SSH

If you provided a public SSH key when creating the node pool in a cluster, the public key is installed on all worker nodes in the cluster. On UNIX and UNIX-like platforms (including Solaris and Linux), you can then connect through SSH to the worker nodes using the ssh utility (an SSH client) to perform administrative tasks. The following instructions assume the UNIX machine you use to connect to the worker node:

- has the ssh utility installed
- has access to the SSH private key file paired with the SSH public key that was specified when the cluster was created

How you connect to worker nodes using SSH depends on whether you specified public or private subnets for the worker nodes when defining the node pools in the cluster.

Connecting to Worker Nodes in Public Subnets Using SSH

Before you can connect to a worker node in a public subnet using SSH, you must define an ingress rule in the subnet's security list to allow SSH access. The ingress rule must allow access to port 22 on worker nodes from source 0.0.0.0/0 and any source port.

To connect to a worker node in a public subnet through SSH from a UNIX machine using the ssh utility:

1. Find out the IP address of the worker node to which you want to connect. You can do this in a number of ways:
   - Using kubectl. If you haven't already done so, follow the steps to set up the cluster's kubeconfig configuration file and (if necessary) set the KUBECONFIG environment variable to point to the file. Note that you must set up your own kubeconfig file; you cannot access a cluster using a kubeconfig file that a different user set up (see Setting Up Cluster Access). Then, in a terminal window, enter kubectl get nodes to see the public IP addresses of worker nodes in node pools in the cluster.
   - Using the Console. In the Console, display the Cluster List page and then select the cluster to which the worker node belongs. On the Node Pools tab, click the name of the node pool to which the worker node belongs. On the Nodes tab, you see the public IP address of every worker node in the node pool.
   - Using the REST API. Use the ListNodePools operation to see the public IP addresses of worker nodes in a node pool.

2. In the terminal window, enter ssh opc@<node_ip_address> to connect to the worker node, where <node_ip_address> is the IP address of the worker node that you made a note of earlier. For example, you might enter ssh opc@192.0.2.254.

   If the SSH private key is not stored in the file or in the path that the ssh utility expects (for example, the ssh utility might expect the private key to be stored in ~/.ssh/id_rsa), you must explicitly specify the private key filename and location in one of two ways:
   - Use the -i option to specify the filename and location of the private key. For example: ssh -i ~/.ssh/my_keys/my_host_key_filename opc@192.0.2.254
   - Add the private key filename and location to an SSH configuration file, either the client configuration file (~/.ssh/config) if it exists, or the system-wide client configuration file (/etc/ssh/ssh_config). For example, you might add the following:

         Host 192.0.2.254
           IdentityFile ~/.ssh/my_keys/my_host_key_filename

   For more about the ssh utility's configuration file, enter man ssh_config. Note also that permissions on the private key file must allow you read/write/execute access, but prevent other users from accessing the file. For example, to set appropriate permissions, you might enter chmod 600 ~/.ssh/my_keys/my_host_key_filename. If permissions are not set correctly and the private key file is accessible to other users, the ssh utility will simply ignore the private key file.
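As a compact recap of the commands above (the key path and the 192.0.2.254 address are the documentation's example values, not values taken from the question):

    # list worker nodes and their public IP addresses (requires a configured kubeconfig)
    kubectl get nodes -o wide

    # restrict the private key file so the ssh utility will accept it
    chmod 600 ~/.ssh/my_keys/my_host_key_filename

    # connect to the worker node as the opc user with that key
    ssh -i ~/.ssh/my_keys/my_host_key_filename opc@192.0.2.254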

Question 6

In the sample Kubernetes manifest file below, what annotations should you add to create a private load balancer in Oracle Cloud Infrastructure Container Engine for Kubernetes?

A. service.beta.kubernetes.io/oci-load-balancer-private:"true"

B. service.beta.kubernetes.io/oci-load-balancer-private: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa....vdfw"

C. service.beta.kubernetes.io/oci-load-balancer-internal: "true"

D. service.beta.kubernetes.io/oci-load-balancer-internal: "true" service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaa....vdfw"


Correct Answer: D

https://docs.cloud.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingloadbalancer.htm

Creating Internal Load Balancers in Public and Private Subnets

You can create Oracle Cloud Infrastructure load balancers to control access to services running on a cluster:

- When you create a 'custom' cluster, you select an existing VCN that contains the network resources to be used by the new cluster. If you want to use load balancers to control traffic into the VCN, you select existing public or private subnets in that VCN to host the load balancers.
- When you create a 'quick cluster', the VCN that's automatically created contains a public regional subnet to host a load balancer. If you want to host load balancers in private subnets, you can add private subnets to the VCN later.

Alternatively, you can create an internal load balancer service in a cluster to enable other programs running in the same VCN as the cluster to access services in the cluster. You can host internal load balancers in public subnets and private subnets.

To create an internal load balancer hosted on a public subnet, add the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-internal: "true"

To create an internal load balancer hosted on a private subnet, add both of the following annotations in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-internal: "true"
    service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"

where ocid1.subnet.oc1..aaaaaa....vdfw is the OCID of the private subnet.
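For orientation, a minimal manifest sketch showing where these annotations sit (the service name, port, and selector are illustrative; the subnet OCID placeholder is the one quoted above):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-internal-svc          # illustrative name
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-internal: "true"
        service.beta.kubernetes.io/oci-load-balancer-subnet1: "ocid1.subnet.oc1..aaaaaa....vdfw"
    spec:
      type: LoadBalancer
      ports:
      - port: 80                     # illustrative port
      selector:
        app: nginx                   # illustrative selector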

Question 7

How can you find details of the tolerations field for the sample YAML file below?

A. kubectl list pod.spec.tolerations

B. kubectl explain pod.spec.tolerations

C. kubectl describe pod.spec tolerations

D. kubectl get pod.spec.tolerations


Correct Answer: B

Use kubectl explain to list the fields of supported resources.

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#explain
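A short usage sketch of the command the answer names (the field path matches the question; the deeper path is shown only as an example):

    # describe the tolerations field of the Pod spec
    kubectl explain pod.spec.tolerations

    # drill into a nested field, for example the effect sub-field
    kubectl explain pod.spec.tolerations.effect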

Question 8

A leading insurance firm is hosting its customer portal in Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes with an OCI Autonomous Database. Their support team discovered a large number of SQL injection attempts and cross-site scripting attacks against the portal, which is starting to affect the production environment.

What should they implement to mitigate these attacks?

A. Network Security Lists

B. Network Security Groups

C. Network Security Firewall

D. Web Application Firewall


Correct Answer: D

Oracle Cloud Infrastructure Web Application Firewall (WAF) is a cloud-based, Payment Card Industry (PCI) compliant, global security service that protects applications from malicious and unwanted internet traffic. WAF can protect any internet-facing endpoint, providing consistent rule enforcement across a customer's applications. WAF gives you the ability to create and manage rules for internet threats including Cross-Site Scripting (XSS), SQL Injection, and other OWASP-defined vulnerabilities. Unwanted bots can be mitigated while desirable bots are tactically allowed to enter. Access rules can limit requests based on geography or the signature of the request.

Question 9

You are building a container image and pushing it to Oracle Cloud Infrastructure Registry (OCIR). You need to make sure that these images get deleted from the repository.

Which action should you take?

A. Create a group and assign a policy to perform lifecycle operations on images.

B. Set global policy of image retention to "Retain All Images".

C. In your compartment, write a policy to limit access to the specific repository.

D. Edit the tenancy global retention policy.


Correct Answer: D

Deleting an Image

When you no longer need an old image or you simply want to clean up the list of image tags in a repository, you can delete images from Oracle Cloud Infrastructure Registry. Your permissions control the images in Oracle Cloud Infrastructure Registry that you can delete. You can delete images from repositories you've created, and from repositories that the groups to which you belong have been granted access to by identity policies. If you belong to the Administrators group, you can delete images from any repository in the tenancy. Note that as well as deleting individual images, you can set up image retention policies to delete images automatically based on selection criteria you specify (see Retaining and Deleting Images Using Retention Policies).

Note: In each region in a tenancy, there's a global image retention policy. The global image retention policy's default selection criteria retain all images, so that no images are automatically deleted. However, you can change the global image retention policy so that images are deleted if they meet the criteria you specify. A region's global image retention policy applies to all repositories in the region, unless it is explicitly overridden by one or more custom image retention policies.

You can set up custom image retention policies to override the global image retention policy with different criteria for specific repositories in a region. Having created a custom image retention policy, you apply it to a repository by adding the repository to the policy. The global image retention policy no longer applies to repositories that you add to a custom retention policy.

Question 10

Given a service deployed on Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE), which annotation should you add in the sample manifest file to specify a 400 Mbps load balancer?

A. service.beta.kubernetes.io/oci-load-balancer-kind: 400Mbps

B. service.beta.kubernetes.io/oci-load-balancer-value: 400Mbps

C. service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps

D. service.beta.kubernetes.io/oci-load-balancer-size: 400Mbps


Correct Answer: C

The shape of an Oracle Cloud Infrastructure load balancer specifies its maximum total bandwidth (that is, ingress plus egress). By default, load balancers are created with a shape of 100Mbps. Other shapes are available, including 400Mbps and 8000Mbps.

To specify an alternative shape for a load balancer, add the following annotation in the metadata section of the manifest file:

    service.beta.kubernetes.io/oci-load-balancer-shape: <value>

where <value> is the bandwidth of the shape (for example, 100Mbps, 400Mbps, 8000Mbps).

For example:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-nginx-svc
      labels:
        app: nginx
      annotations:
        service.beta.kubernetes.io/oci-load-balancer-shape: 400Mbps
    spec:
      type: LoadBalancer
      ports:
      - port: 80
      selector:
        app: nginx

https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md

Question 11

What is the open source engine for Oracle Functions?

A. Apache OpenWhisk

B. OpenFaaS

C. Fn Project

D. Knative


Correct Answer: C

https://www.oracle.com/webfolder/technetwork/tutorials/FAQs/oci/Functions-FAQ.pdf

Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs.
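A minimal sketch of the Fn Project CLI workflow that underpins Oracle Functions (the function and application names are illustrative; deploying to Oracle Functions additionally assumes a configured Fn context and container registry):

    # scaffold a function from a runtime template
    fn init --runtime python hello-fn
    cd hello-fn

    # build the image and deploy the function into an application
    fn deploy --app my-app

    # invoke the deployed function
    fn invoke my-app hello-fn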

Question 12

What is the difference between blue/green and canary deployment strategies?

A. In blue/green, the application is deployed in minor increments to a select group of people. In canary, both old and new applications are simultaneously in production.

B. In blue/green, both old and new applications are in production at the same time. In canary, the application is deployed incrementally to a select group of people.

C. In blue/green, current applications are slowly replaced with new ones. In canary, the application is deployed incrementally to a select group of people.

D. In blue/green, current applications are slowly replaced with new ones. In canary, both old and new applications are in production at the same time.


Correct Answer: B

Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. For this example, Blue is currently live and Green is idle. https://docs.cloudfoundry.org/devguide/deploy-apps/blue-green.html

Canary deployments are a pattern for rolling out releases to a subset of users or servers. The idea is to first deploy the change to a small subset of servers, test it, and then roll the change out to the rest of the servers. ... Canaries were once regularly used in coal mining as an early warning system. https://octopus.com/docs/deployment-patterns/canary-deployments
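One rough way to get canary-style traffic splitting on plain Kubernetes, shown purely as an illustration (names, labels, and image tags are hypothetical), is to run a stable and a canary Deployment behind a single Service whose selector matches both, keeping the canary replica count small:

    # stable track: most replicas, old version
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-stable
    spec:
      replicas: 9
      selector:
        matchLabels: {app: myapp, track: stable}
      template:
        metadata:
          labels: {app: myapp, track: stable}
        spec:
          containers:
          - name: myapp
            image: myapp:1.0        # hypothetical image tag
    ---
    # canary track: one replica, new version (roughly 10% of traffic)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-canary
    spec:
      replicas: 1
      selector:
        matchLabels: {app: myapp, track: canary}
      template:
        metadata:
          labels: {app: myapp, track: canary}
        spec:
          containers:
          - name: myapp
            image: myapp:1.1        # hypothetical image tag
    ---
    # the Service selects only on app, so it balances across both tracks
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp
    spec:
      selector:
        app: myapp
      ports:
      - port: 80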

Question 13

Which is NOT a supported SDK on Oracle Cloud Infrastructure (OCI)?

A. Ruby SDK

B. Java SDK

C. Python SDK

D. Go SDK

E. .NET SDK


Correct Answer: E

https://docs.cloud.oracle.com/en-us/iaas/Content/API/Concepts/sdks.htm
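As a quick check, the supported SDKs are published through the usual package channels; for example, the OCI Python SDK ships on PyPI as the oci package:

    # install the OCI Python SDK
    pip install oci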

Exam Code: 1Z0-1084-20
Exam Name: Oracle Cloud Infrastructure Developer 2020 Associate
Last Update: Jun 13, 2025
Questions: 72
