Final Questions Flashcards

1
Q

What is product-based software engineering?

A

Originally, development was led by a customer who had a problem; from this problem the customer and the developers generated the requirements, an often painful phase, and then the software was implemented. Nowadays the developers see an opportunity, the opportunity inspires product features, and the developers implement a software product that realises the opportunity. A product vision starts from a series of questions: WHO is the product for, WHAT is the product, and WHY should customers buy it. These can be expanded into a vision template: FOR a target customer, WHO needs something, THE product name IS a product category THAT gives the reason to buy it; UNLIKE competitive alternatives, OUR product has this main difference. The vision derives from domain experience, product experience, customer experience, and prototyping and playing around.

2
Q

Which are the key Scrum practices?

A

The product backlog is a to-do list of the items that must be done to complete the development of the product. The entries in this list are called product backlog items and can be very different things, such as features to be implemented, user requests, and engineering improvements. The product backlog must be prioritised so that the most urgent and important items are at the top of the list. Each item also has a state: ready for consideration, ready for refinement, or ready for implementation.
There are several activities related to the product backlog. Refinement, where existing product backlog items are analysed and split to create more detailed items. Estimation, where the amount of work needed to implement each item is estimated. Creation, where new items are added to the backlog. And prioritisation, where all items are reordered to take new circumstances into account.
The metrics used for product backlog items are the effort required, estimated in person-hours or person-days (different people can work on the same item to finish it sooner), and story points, which estimate the overall size of a task: the effort involved, the complexity of the technologies, and the unknown characteristics. Story points are chosen by the team relative to other product backlog items.
Timeboxed sprints: the product is developed in fixed periods of two to four weeks, each of which delivers an increment of the product; a sprint stops at its deadline even if the work has not been completed. The sprint activities are planning, where the work items are selected and refined if necessary (this should not last more than a day); execution, where the selected product backlog items are implemented (this part cannot be extended, even if the work is not finished); and review, where all the work done is reviewed by the team, and possibly also by the stakeholders, to check what went well and what went wrong during the process.
The ideal Scrum team size is between five and eight people. This size is useful because the team is large enough to be diverse, but small enough to communicate informally and effectively. The whole team takes responsibility for the work, so people can join and leave the team without problems. Good communication means that the people in the team learn about each other's areas of expertise. External interactions are handled by the ScrumMaster for team-focused external interactions, and by the product owner for product-focused external interactions. To report progress and organise the work, someone in the team takes on the project manager's responsibilities.

3
Q

What are personas, scenarios, user stories and features?

A

We can use natural-language descriptions of users, called personas, together with natural-language scenarios and stories, to identify the product features. A persona is a character representing a target user of our product; we should identify from one to five personas to derive the key product features. A persona description covers the personalisation, job-related, and education aspects relevant to the product. From the personas we write scenarios, in which a persona uses a product feature to do something. From the scenarios, which are high-level stories, we derive the user stories, which are formulated as: "As a <role>, I want <to do something>, so that <reason>". The features identified must be independent, coherent, and relevant. Features can be extracted from user knowledge, through scenarios and user stories; from product knowledge, that is, experience of older products that provide the fundamental functionality; from domain knowledge of the area that the product supports, in particular letting users do what they want in an innovative way; and from technology knowledge, since new technologies can lead to new features.

4
Q

What is the role of non-functional quality attributes and decomposition in a software architecture?

A

The non-functional quality attributes are: responsiveness (does the system return results in a reasonable time?), reliability (does the system behave as expected?), availability (can the system deliver its services when requested by users?), security (can the system protect itself from unauthorised attacks?), usability (can users access the features quickly and without errors?), maintainability (can the system be updated with new features without undue effort?), and resilience (can the system continue to work after a partial failure?). Optimising some non-functional attributes can affect others; for example, an increase in security can lead to performance and usability issues. A system can be decomposed into services, which are coherent units of functionality; components, which are software units offering one or more services; and modules, which are sets of components. A large number of components can increase the complexity of the system. To control the complexity of a system we have to separate the concerns into components focused on a single concern, create stable and coherent interfaces that change slowly, and implement each piece of functionality only once. We can have layered architectures, where each layer addresses a concern and no layer knows the implementation of the other layers; we also have cross-cutting concerns, like security, performance, and reliability, that add interactions between layers. The basic layers of a web or mobile application are: the browser or mobile user interface; authentication and UI management; application functionality; basic shared services; and database and transaction management.

5
Q

What is a distribution architecture?

A

A distribution architecture refers to the way in which a software system is divided into separate components that run on different machines or devices and communicate with each other over a network. We have to define the servers and the allocation of components to servers. We can have a client-server architecture, in which clients access a shared database and the business logic is performed on those data (figure 1 is an example of a client-server architecture). The model-view-controller pattern is often used, where each view registers with a model. Client-server communication usually uses HTTP and XML/JSON. The architecture can be multi-tier or service-oriented: in both, all the clients contact a web server; in the multi-tier case the web server contacts the application server, which then contacts the database server, while in the service-oriented case an API gateway is contacted, and the gateway then contacts each required service. When an application is distributed, we have to put the components that change with the same frequency in the same service; it is also important to avoid distributing data and, when data must be distributed, to manage the problems that distribution introduces. We have to choose between the cloud, and therefore a service-oriented architecture, for a system that needs to scale, and a local server with a multi-tier architecture.

6
Q

Which are the main features of Enterprise Integration Patterns?

A

Enterprise applications are composed of heterogeneous services, use various data types, involve different participants, and are all connected via a network; they are complex, distributed, multi-service systems. The problem is how to integrate all these different applications using a pattern, that is, a high-level abstraction of a solution. An enterprise integration pattern is a reusable abstraction of a solution to a well-known problem of integrating the software components that form enterprise applications. We have the message, which is a piece of data sent from one service to another, composed of a header and a body. Then we have the channels, which give applications the possibility to communicate; communication can be synchronous or asynchronous, and channels can be point-to-point or publish-subscribe. Usually an application does not know the messaging system, so channel adapters are used to send messages to the channels, and messaging endpoints to receive or send messages through the channels. Since we can have different types of messages, we need message translators. We use the pipes-and-filters architecture: messages pass through filters and are forwarded through the pipes that connect the applications. We can have a content enricher, which adds information to a message, and a message router, which routes messages: a content-based router routes based on the message type or message content, while a context-based router uses information from a central configuration. We also have the message filter, which discards unwanted messages, and routers that route based on content or on a recipient list. The normaliser translates a message into a common data format. Then we have the splitter, which breaks down a composite message into a series of individual messages, and the aggregator, which collects these messages back together.
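A content-based router, one of the patterns above, can be sketched in a few lines of Python. The dict-based message format and the channel names are illustrative assumptions, not the API of any specific integration library:

```python
# A message is modelled as a dict with a "header" and a "body"; a channel
# is modelled as a simple queue (here: a list). The router inspects the
# message type in the header and forwards the message to the matching channel.

def route(message, channels):
    """Content-based routing: pick the output channel from the message type."""
    msg_type = message["header"]["type"]
    channel = channels.get(msg_type)
    if channel is None:
        raise ValueError(f"no channel registered for message type {msg_type!r}")
    channel.append(message)
    return msg_type

orders, invoices = [], []
channels = {"order": orders, "invoice": invoices}

route({"header": {"type": "order"}, "body": {"item": "book"}}, channels)
route({"header": {"type": "invoice"}, "body": {"amount": 10}}, channels)
```

A message filter or recipient list can be built the same way: the filter drops messages instead of raising, and the recipient list appends the message to several channels at once.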

7
Q

Which are the technology choices that affect a software architecture?

A

DATABASE, PLATFORM, SERVER, OPEN SOURCE, DEVELOPMENT TOOLS.
When we create a software architecture we have to choose the technologies: the database (whether we use a SQL or NoSQL DB), the platform (whether the app is a mobile or web app), the server (whether it runs on an in-house server or in the cloud), the open-source technologies, and the development tools, which can limit our architectural choices. The database can be relational, if you need transactions, or NoSQL, if you have more flexible data that can be organised hierarchically. The delivery platform is also really important, and with mobile products we can have problems such as limited processor power and power management. The server running our application can be in the cloud for a consumer product, while for business products it can be better not to use the cloud because of customers' concerns about security. The chosen tools can influence the architecture of the product: for example, many development frameworks assume model-view-controller, and some technologies can influence the choice of database.

8
Q

Which are the differences between multi-tenant and multi-instance SaaS systems?

A

Multi-tenant means a single database and schema shared by all the system's users; the items in the database are tagged with a tenant identifier to provide logical isolation. The advantages are about resource utilisation, because the software can use the available resources more effectively; security can also be improved, because there is only one DB to be patched; and updates are easier. The cons are the inflexibility of everyone using the same DB, security in case of a leak, and a complexity greater than multi-instance. The multi-instance system, on the other hand, is simpler than multi-tenant and avoids concerns about cross-tenant data leaks. It can be VM-based, where each software instance and DB runs in its own VM and all users from the same customer may access a shared system DB, or container-based, where each user has an isolated version of the software and DB running in a set of containers; the latter is most useful for products where each user works independently with little data sharing. The pros are flexibility, since each instance can adapt to the customer's needs; no possibility of leaks across customers; simpler scalability; and the fact that if an error occurs for one customer, the others can continue to work. The cons are the higher cost and the update management. The organisation of the DB can be a key factor in choosing the right option: if target customers have security concerns about DB sharing, use multi-instance; if transactions and data consistency are needed, use either multi-tenant or VM-based multi-instance; big DBs are better served by multi-tenant, where they can be optimised; and if the system is service-oriented, use multi-instance databases.
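The tenant-identifier tagging that gives multi-tenant systems their logical isolation can be sketched with Python's built-in SQLite. The table, column names, and tenants here are hypothetical:

```python
import sqlite3

# Every row carries a tenant_id; every query filters on it. The WHERE
# clause is what provides the logical isolation: a tenant never sees
# rows tagged with another tenant's identifier.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
db.executemany("INSERT INTO documents VALUES (?, ?)",
               [("acme", "report"), ("acme", "invoice"), ("globex", "memo")])

def documents_for(tenant_id):
    rows = db.execute("SELECT title FROM documents WHERE tenant_id = ?",
                      (tenant_id,))
    return [title for (title,) in rows]
```

In a multi-instance system, by contrast, each customer would get their own database file, and no `tenant_id` column would be needed.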

9
Q

What does the CAP theorem tell us?

A

In the presence of a network partition, you cannot have both availability and consistency. Consistency is defined as: any read operation that begins after a write completes must return that write's changes. Availability: every request received by a non-failing node must result in a response. Network partition: the network can lose arbitrarily many messages sent from one group of nodes to another.

10
Q

Which are the main pros, cons and characteristics of microservices?

A

Microservices are small-scale, stateless services with a single responsibility, independent from one another, so that it is possible to redeploy one without changing or stopping the other services. They are self-contained, with no external dependencies; use lightweight protocols to communicate; have independent implementations (e.g. different technologies); and are independently deployable and business-oriented. The pros are the short time needed to create new features and updates, the ability to scale effectively, quick restarts without affecting other services, and service replicas that are quickly deployable. The cons are that the complexity of the system increases dramatically, and that microservices depend on the network, which can increase the response time between services.

11
Q

How can we feature authentication and authorization in a software product?

A

The objective of authentication is to be sure that a user is who they claim to be. It can be performed in three ways: knowledge-based, with a password for example; possession-based, like a confirmation code sent to a smartphone; and attribute-based, with a biometric attribute of the user. Password-based authentication can lead to problems, like users who forget their password or reuse the same password; to overcome this, passwords can be forced to be strong, and for forgotten passwords knowledge-based authentication, like a question-and-answer check, can be used. It is good to use two-stage authentication only if confidential information is involved. Building a secure authentication system is difficult, so, even when OAuth is used, a federated identity provider like Google can be adopted. Authorization is a check of whether a particular user can access certain resources. Access control lists are used to check which kinds of users can access a particular resource.
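An access-control-list check can be sketched in a few lines. The resources and roles are hypothetical; the point is only the shape of the lookup (resource → set of roles allowed to access it):

```python
# ACL: maps each protected resource to the set of roles allowed to access it.
ACL = {
    "/reports": {"manager", "admin"},
    "/settings": {"admin"},
}

def is_authorized(role, resource):
    """Authorization check: is this (already authenticated) role allowed in?"""
    # Unknown resources default to the empty set, i.e. access denied.
    return role in ACL.get(resource, set())
```

Note that this runs after authentication: the ACL assumes the caller's role has already been verified.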

12
Q

What is static/dynamic vulnerability analysis?

A

Static vulnerability analysis is a type of white-box analysis that has full access to the source code. It uses static analysis techniques to find security vulnerabilities that are caused by the code itself (e.g. hardcoded secrets, old libraries with known vulnerabilities, bad crypto practices). Dynamic vulnerability testing is a black-box analysis. It tries to break the security controls and find vulnerabilities by calling the application's API endpoints in many different ways. Its purpose is to find badly designed authentication and authorization policies by exploiting the behaviour of a running application. It can find vulnerabilities such as missing CSRF tokens, XSS, code injection problems, security misconfigurations, unnecessary data exposure, etc.

13
Q

What is a workflow net?

A

A workflow net is an extension of Petri nets. A Petri net consists of places, transitions, and directed arcs connecting places to transitions. Transitions model activities; places and arcs model execution constraints. The system dynamics are represented by tokens, whose distribution over the places determines the state of the modelled system. A transition can fire if there is a token in each of its input places. When a transition fires, one token is removed from each input place and one token is added to each output place.
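The firing rule described above can be sketched directly in Python. A marking is a dict from place names to token counts, and a transition is a pair (input places, output places); the tiny example net is hypothetical:

```python
# enabled: a transition can fire iff every input place holds a token.
def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

# fire: remove one token from each input place, add one to each output place.
def fire(marking, transition):
    if not enabled(marking, transition):
        raise ValueError("transition not enabled")
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

# A tiny net: transition t consumes from places i and a, produces in o.
t = (["i", "a"], ["o"])
m0 = {"i": 1, "a": 1, "o": 0}
m1 = fire(m0, t)   # -> {"i": 0, "a": 0, "o": 1}
```

Running the rule repeatedly from an initial marking enumerates the reachable states, which is exactly what the soundness and boundedness checks of the next card reason about.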

14
Q

What is a sound/live/bounded net?

A

A Petri net is a workflow net iff: 1. there is a unique source place, with no incoming edge; 2. there is a unique sink place, with no outgoing edge; 3. all places and transitions are located on at least one path from the source place to the sink place.
A workflow net is sound iff: 1. every net execution starting from the initial state (one token in the source place, no tokens elsewhere) eventually leads to the final state (one token in the sink place, no tokens elsewhere); 2. every transition occurs in at least one net execution.
A Petri net (PN, M) is live if and only if for every reachable state M' and every transition t, there is a state M'' reachable from M' where t is enabled. A Petri net (PN, M) is bounded if and only if for each place p there is an n in N such that for each reachable state M' the number of tokens in p in M' is less than n.
Theorem: a workflow net N is sound if and only if (N', {i}) is live and bounded, where N' is N extended with a transition from the sink place o to the source place i.

15
Q

What is Camunda?

A

Camunda is a framework supporting BPMN for workflow and process automation. It provides a RESTful API, which allows any language to be used. Workflows are defined via BPMN and can be graphically modelled using the Camunda Modeler.

16
Q

What is Locust?

A

Locust is an open-source load-testing Python tool. It provides a Python library and a simple web interface to generate varying numbers of API calls to stress-test an application. A locustfile.py must be provided, where the various user API calls are defined. The number of these calls and their rate are defined through the web interface.
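A minimal locustfile sketch, run with `locust -f locustfile.py` against a target host; the `/items` endpoints, task weights, and wait times are hypothetical examples, not part of any particular application:

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Each simulated user waits 1-3 seconds between tasks.
    wait_time = between(1, 3)

    @task(3)                      # weight 3: called three times as often
    def list_items(self):
        self.client.get("/items")

    @task
    def create_item(self):
        self.client.post("/items", json={"name": "test"})
```

The web interface then lets you choose how many `ApiUser` instances to spawn and at what rate, and shows response times and failure counts per endpoint.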

17
Q

Which are the two usage patterns of Camunda?

A
  1. Endpoint-based integration: after defining a BPMN process, Camunda can directly call services via built-in connectors. It supports REST and SOAP.
  2. Queue-based integration: units of work (tasks) are placed in a topic queue; the queue is polled by RESTful external workers that can interact with services (better scaling).
18
Q

What is the effect of docker build/run/commit/tag?

A

Docker build builds a new image based on a specification written in a Dockerfile. Docker run executes a command in a new container created from a specified image, using the default command of the image (if one exists) when no command is provided. Docker commit creates a new image from a container's changes to the image it was built on. Docker tag creates an alias for an image, so we don't have to remember the exact ID each time.

19
Q

Pros and cons microservices?

A

* Service orientation
– Application as sets of services
– each service has its own container
– lightweight communication protocols (REST), can be synchronous (HTTP) or asynchronous (RabbitMQ, Redis)
– Polyglot services
* Organize services around business capabilities
– Agile methods, cross-functional teams, flat set of services managed by many teams
– Different teams with separated roles introduce a delay in communications (context switching)
* Decentralized data
– Each service has its own DB, which will be smaller
– Eventual consistency and compensations instead of distributed transactions
∗ We accept some inconsistencies, but they will become consistent some time in the future
* Independently deployed services
– Ideally each service should start without any dependency; coupling should be reduced as much as possible
* Horizontal scalability
– replicate only the services that actually need the scaling, not the entire application
∗ Must be careful when dealing with endpoint-based communication with another service (this is a smell; it should be addressed with service discovery or a message router)
* Fault-resilient services
– Avoid cascading failures
– Must have a fault-tolerant design
– Any call can fail for any reason; these failures must be handled as gracefully as possible
– Design for failure (chaos testing, fault injection)
* DevOps culture: you build it, you run it
CONS:
* Don't even consider microservices unless you have a system that's too complex to manage as a monolith
* Communication overhead
* Architecture complexity
* "Wrong cuts": maybe you split the system wrong and two services end up tightly coupled; it is a very empirical process
* Very hard to avoid data duplication
* Security management very complex; the attack surface is broad

20
Q

Which are the main challenges in securing a microservice?

A

The main challenges in securing microservices are embedded in the architecture itself. Since we have many services communicating remotely, the number of entry points increases (broader attack surface), and the app is only as secure as its weakest link. Other challenges:
* Distributed security screening: each microservice has to carry out independent security screening:
– it may need to connect to a remote security token service
– repeated, distributed security checks affect latency and performance
Workaround: trust-the-network (but the industry is moving to zero-trust policies)
* Bootstrapping trust among microservices: service-to-service communication must take place over protected channels. Suppose you are using certificates:
– each microservice must be provisioned with a certificate (and private key) to authenticate itself to another microservice during interactions
– the recipient microservice must know how to validate the certificate associated with the calling microservice
– trust must be bootstrapped (certificates also need to be revoked and rotated)
Automation is needed for large-scale deployments.
* Tracing requests spanning multiple microservices: a log records an event in a service, and a set of logs can be aggregated to produce metrics. Traces help you track a request from the point where it enters the system to the point where it leaves it. It is challenging to correlate requests among microservices.
* Containers complicate credentials/policies handling: containers are immutable servers that don't change state after spin-up, but we need to maintain a dynamic list of allowed clients and a dynamic set of access-control policies, e.g. getting updated policies from some policy administration endpoint (push vs pull model). Each service must also maintain its own credentials, which need to be rotated periodically, e.g. by keeping credentials in the container filesystem and injecting them at boot time.
* Distribution makes sharing user context harder: the user context has to be passed explicitly from one microservice to another. How can we build trust so that a receiving microservice accepts an incoming user context? A popular solution is to use JSON Web Tokens.
* Decentralised security responsibilities: different teams can use different technology stacks, which can mean different security practices and tools for static and dynamic analysis, with security responsibilities distributed across different teams. Usually a hybrid approach with a centralized security team is adopted.

21
Q

What is Docker Compose?

A

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
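A minimal sketch of such a YAML file for a hypothetical two-service application (a web API plus its database; the service names, port, and image tag are illustrative):

```yaml
# docker-compose.yml: one command (`docker compose up`) starts both services.
services:
  web:
    build: .            # image built from the local Dockerfile
    ports:
      - "8000:8000"     # host port 8000 -> container port 8000
    depends_on:
      - db              # start the database before the web service
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

Compose also creates a default network for the application, so `web` can reach the database simply at the hostname `db`.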

22
Q

What is a Camel route (for)?

A

A route in Apache Camel is a sequence of processing steps, executed in order by Camel, that consume and process a message as it travels from a source to a destination. A Camel route starts with a consumer and is followed by a chain of endpoints and processors. So, firstly, a route receives a message using a consumer – perhaps from a file on disk, or a message queue. Then, Camel executes the rest of the steps in the route, which either process the message in some way, or send it to endpoints (which can include other routes) for further processing.

A Camel route is where the integration flow is defined. For example, you can write a Camel route to specify how two systems can be integrated. You can also specify how the data can be manipulated, routed, or mediated between the systems.

The routes are typically defined using a simple, declarative syntax that is easy to read and understand.

For instance, you could write a route to consume files from an FTP server and send them to an ActiveMQ messaging system. A route to do so, using Java DSL, would look like this:

from("ftp:myserver/folder")
    .to("activemq:queue:cheese");

23
Q

What is Minikube?

A

Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems

24
Q

How locust can be used to identify bottlenecks?

A

Using load tests: an instance of the component is created and stress-tested by simulating different amounts of workload; by observing where response times degrade or failures appear as the load increases, the bottleneck components can be identified.

25
Q

How does K8s control plan work?

A

In a cluster we have two types of machines: master nodes, which contain most of the control-plane components, and worker nodes, which run the application workloads. A user gives a new or updated specification object to the API server on the master node; the API server validates it and acts as the unified interface for questions about the cluster's current state. The state of the cluster is stored in etcd, a distributed key-value store. The components of the K8s control plane are: the scheduler, which asks the API server which objects are unassigned and chooses which node should be given the work, with the API server then recording this decision; the controller manager, which monitors the cluster state through the API server and, if there are differences from the desired state, acts to reconcile them; and the kubelet, which acts as a node agent that communicates with the API server to check which container workloads have been assigned to its node and is responsible for starting the pods that run the assigned workloads. When a new node joins the cluster, the API server is informed.
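The "desired state" the control plane reconciles against is just a specification object submitted to the API server. A minimal sketch of one (the app name and image are hypothetical):

```yaml
# Desired state: "keep 3 replicas of this pod running". The scheduler places
# the pods on nodes; the controller manager restores the count if one dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0
```

Submitting this with `kubectl apply -f deployment.yaml` hands it to the API server, and the rest of the control plane works to make the cluster match it.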

26
Q

What is test automation?

A

Executable tests that check whether the software returns the expected results. Tests are structured in three parts: arrange, act, and assert. Tests must be as simple as possible and must be reviewed. Unit tests are the most common (about 70%), then feature tests (about 20%) and system tests (about 10%).
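The arrange/act/assert structure can be sketched with Python's built-in unittest; the function under test is a hypothetical example:

```python
import unittest

# Hypothetical code under test: apply a percentage discount to a price.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # arrange: set up the inputs
        price, percent = 100.0, 10
        # act: call the code under test
        result = apply_discount(price, percent)
        # assert: check the result against the expected value
        self.assertEqual(result, 90.0)
```

Keeping each test to exactly these three steps is what makes it "as simple as possible" and easy to review.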

27
Q

What is functional testing?

A

A large set of program tests that are executed so that all the code runs at least once. Testing starts on the first day of coding. It is composed of unit testing, which identifies equivalence partitions and tests them in isolation; feature testing, which tests whether a feature works and is implemented as expected; system testing, which checks that the system has no unwanted interactions between features; and release testing, which tests the system in the real environment before the release.

28
Q

What is the effect of a git add/branch/clone/checkout/push/commit/pull command?

A

git add: adds a file to the staging area, which is a temporary holding area where changes can be reviewed before being committed to the repository. This allows you to include only the changes you want in a commit.
git branch: creates a new branch in the repository. Branches allow you to work on different versions of your code simultaneously and merge the changes back into the main branch (usually called "master") when you're ready.
git clone: makes a copy of an entire repository and downloads it to your local machine. This is usually used to obtain a local copy of a repository that is hosted remotely, such as on GitHub.
git checkout: switches to a different branch or restores files in your working directory to a previous version.
git push: sends local commits to the remote repository. This is used to share your changes with other collaborators or to make your changes available online.
git commit: saves changes to the local repository. When you make a commit, you should include a commit message that describes the changes you have made.
git pull: downloads new commits and merges them into your local repository. This is used to synchronize your local repository with the remote repository and incorporate changes made by other collaborators.

29
Q

Which are the patterns of unit/load tests?

A

Test edge cases: if your partition has upper and lower bounds (e.g., length of strings, numbers, etc.), choose inputs at the edges of the range.
Force errors: choose test inputs that force the system to generate all error messages; choose test inputs that should generate invalid outputs.
Fill buffers: choose test inputs that cause all input buffers to overflow.
Repeat yourself: repeat the same test input or series of inputs several times.
Overflow and underflow: if your program does numeric calculations, choose test inputs that cause it to calculate very large or very small numbers.
Don't forget null and zero: if your program uses pointers or strings, always test with null pointers and strings; if you use sequences, test with an empty sequence; for numeric inputs, always test with zero.
Keep count: when dealing with lists and list transformations, keep count of the number of elements in each list and check that these are consistent after each transformation.
One is different: if your program deals with sequences, always test with sequences that have a single value.
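A few of these patterns can be illustrated against a tiny hypothetical function (a clamp into a range):

```python
# Hypothetical function under test: clamp a value into the range [lo, hi].
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# "Test edge cases": inputs exactly at the boundaries of the partition.
assert clamp(0, 0, 10) == 0       # lower bound
assert clamp(10, 0, 10) == 10     # upper bound
assert clamp(-1, 0, 10) == 0      # just below the range

# "Don't forget null and zero": zero is a valid, often-forgotten input.
assert clamp(0, -5, 5) == 0

# "One is different": a range collapsed to a single value.
assert clamp(3, 7, 7) == 7
```

Each assert picks an input class the patterns above single out, rather than an arbitrary "happy path" value.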

29
Q

What is DevOps automation?

A

Everything that can be automated should be automated. Continuous integration: install and set up the database software, load the test data, compile the files, link the compiled code with libraries, check external services, move configuration files, run system tests. An integrated version of the system is pushed to the shared code repository; it is important not to break the build. Incremental builds recompile only the files that changed. Continuous delivery builds a version of the software that is ready to go into production; continuous deployment pushes this version into production without manual commands. Infrastructure as code automates the process of updating the software on servers, using machine-readable configuration code.

30
Q

What is Jenkins?

A

Jenkins is a continuous integration (CI) and continuous delivery (CD) platform that can be used to automate various aspects of the software development process, including building, testing, and deploying code changes. One way in which Jenkins can exploit Git is by using it as a source control system to manage and track code changes.

31
Q

How does Jenkins exploit Git?

A

When Jenkins is configured to use Git, it can pull code changes from a Git repository and automatically build, test, and deploy the code as part of the CI/CD process. This allows developers to commit their code changes to the Git repository and have Jenkins automatically run the necessary build and test steps, potentially even deploying the code to production if it passes all the required tests.

In addition to using Git as a source control system, Jenkins can also use Git to manage and trigger builds. For example, Jenkins can be configured to poll a Git repository at regular intervals to check for new code changes. When it detects a change, it can automatically trigger a build and test the code, allowing developers to see the results of their changes in near real-time.

Overall, Jenkins can exploit Git by using it to manage and track code changes, as well as to trigger builds and tests as part of the CI/CD process. This can help streamline the development process and improve the speed and reliability of code releases.
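The polling configuration described above can be sketched as a declarative Jenkinsfile; the `make` targets are placeholder stand-ins for a real project's build and test commands:

```groovy
// Declarative pipeline: poll the Git repository and run build and
// test stages on every detected change.
pipeline {
    agent any
    triggers {
        // check the repository for new commits roughly every 5 minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
            }
        }
    }
}
```

In practice a webhook from the Git host is often preferred over polling, since it triggers the pipeline immediately on push instead of on a schedule.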

32
Q

How can K8s or Swarm resolve architectural smells?

A

In a Docker Compose/Swarm deployment, the DNS of the overlay network acts as a dynamic service discovery for the running service instances, resolving the smell of hardcoded endpoints. In Kubernetes, a Deployment handles the replica set of instances, and a Service introduces a message router in front of them, resolving the endpoint-based service interaction smell.

33
Q

What is an explainable failure root cause analysis?

A

When a failure occurs, it can be the start of a cascading failure, so it is crucial to find its source and give an explanation. We must keep in mind that correlation does not imply causation. An explanation permits intervening only on the originally failing service and on the services failing in cascade. We must define a causal relation between events; to do this it is useful to use logged events carrying the service id, a timestamp, the logged event, and its severity. The recursive cases that can be the cause of an error are: an internal error of the invoked service, a failed interaction, a timeout, unreachability of a service invoked by a service instance, and unreachability of the invoked service instance. The base cases are: an internal service error, a temporary service unreachability, and a service that was never started.
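The recursion over the causal relation can be sketched in Python. This is a hedged simplification: the event shape and the causality rule (an error is explained by the most recent earlier error of a service it invokes) are assumptions, and `invokes` is an invented name for the service dependency map.

```python
# Hedged sketch of explainable root-cause analysis over logged events.
# An event carries (service id, timestamp, logged event, severity),
# matching the fields listed above.
from collections import namedtuple

Event = namedtuple("Event", "service timestamp kind severity")

def root_cause(failure, events, invokes):
    """Follow the causal chain backwards: if the failing service invoked
    another service that failed earlier, recurse into that failure
    (recursive case); otherwise the failure itself is the root cause
    (base case, e.g. an internal service error)."""
    candidates = [
        e for e in events
        if e.service in invokes.get(failure.service, [])
        and e.timestamp < failure.timestamp
        and e.severity == "error"
    ]
    if not candidates:
        return failure  # base case: no upstream failure explains this one
    # recursive case: explain via the most recent upstream failure
    return root_cause(max(candidates, key=lambda e: e.timestamp),
                      events, invokes)
```

For a chain frontend → api → db where the database failed first, the analysis walks the cascade back to the database's internal error.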

34
Q

How does GitHub flow work?

A

Create a feature branch from master, commit changes to it, submit a pull request, discuss it and add further commits, and finally merge the feature branch into master.

35
Q

What is a parallel/exclusive/inclusive gateway in BPMN?

A

In a parallel gateway, all outgoing branches are activated and all of them must complete. In an exclusive gateway, exactly one of the branches is activated. In an inclusive gateway, one or more branches are activated.
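The three semantics can be contrasted with a toy function per gateway; the branch names and conditions below are invented for illustration only.

```python
# Toy illustration of BPMN gateway semantics: given outgoing branches and
# their conditions, which branches does each gateway type activate?
def parallel(branches):
    """Parallel gateway: all branches activate; conditions are ignored."""
    return list(branches)

def exclusive(branches, conditions):
    """Exclusive gateway: exactly one branch, the first whose condition holds."""
    for b in branches:
        if conditions[b]:
            return [b]
    return []

def inclusive(branches, conditions):
    """Inclusive gateway: every branch whose condition holds (one or more)."""
    return [b for b in branches if conditions[b]]
```

With branches a, b, c and only a and c's conditions true, the parallel gateway activates all three, the exclusive gateway only a, and the inclusive gateway a and c.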

36
Q

Which are the problems of and solutions for application management in the Cloud-Edge Continuum?

A

Deploying applications with QoS guarantees (assured connection quality, bounded latency) in the cloud-edge continuum is challenging. Problems come both from the requirements of the applications we build, which can be software, hardware, or QoS requirements, and from the infrastructure itself, which is heterogeneous, dynamic, and large. Four dimensions allow managing these applications: QoS, cost, security, and resource consumption.
The core problem of fog computing is how to place applications in the cloud-edge continuum. Three approaches:
ML: let machine learning decide; the problems are explaining why a given decision is taken, and the fact that infrastructures are very dynamic.
MILP: mixed-integer linear programming frameworks, e.g. for managing energy-consuming systems; slow and hard to read.
Declarative: we declare which places are good for services, i.e. whether they respect certain characteristics; an inference engine then takes care of performing the checks.
How to manage an application after it has been deployed: management must be performed at the first deployment on the infrastructure, but also when there are failures, disconnections, or congestion. Placing S services on the N nodes of the cloud-edge infrastructure so as to guarantee QoS and respect the requirements is an NP-hard problem and requires exponential time in the worst case. The solution is continuous reasoning, i.e. considering only the latest changes. This makes it possible to reduce the number of management operations (stop, undeploy, deploy, start) and to reduce the time needed to make decisions at runtime. To achieve this, we consider migrating only the services affected by infrastructure changes, via logic programming.
FogBrain is:
Declarative: easy to understand and concise.
Explainable: being based on Prolog, it can produce explanations of why it takes a given decision.
Scalable: it reduces the size of problem instances by focusing only on the services that actually have problems.
Application requirements are declared as Prolog facts:
service(ServiceId, SwReqs, HwReqs, TReqs). — TReqs are the IoT connection requirements
service(ServiceId, SwReqs, HwReqs, []). — the empty list means no IoT connection requirements
s2s(ServiceId1, ServiceId2, LatReq, BwReq). — latency and bandwidth requirements between two services; they are asymmetric
node(NodeId, SwCaps, HwCaps, TCaps). — software, hardware, and IoT capabilities of a node
link(NodeId1, NodeId2, FeatLat, FeatBw). — featured latency and bandwidth of a link
FogBrain reasoning.
First deployment: for each service S, search among all nodes those that satisfy its hardware, software, and IoT requirements; then verify latency and cumulative bandwidth towards all the services communicating with S; repeat until all services have been placed.
Management decisions migrate services from one node to another. They are triggered by: deployment problems, because the node where we want to place a service cannot deploy it; a node that no longer satisfies the application requirements or has crashed; communication problems with other services, where a link is overloaded, cannot satisfy the latency/bandwidth requirements, or is unavailable.
When a problem is found, FogBrain creates a partially ground query to determine a new node. The worst-case complexity is still O(N^S), but the most common cases are cheaper: 1 service to migrate costs O(N), and 2 services with a connection problem cost O(N^2). This is about 5000 times faster than redoing the placement from scratch for one service, and about 60 times faster for two services.
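The first-deployment loop can be sketched in Python. This is a hedged simplification: FogBrain itself is Prolog-based, the dictionary shapes are invented, and the latency/bandwidth check on s2s links is omitted for brevity.

```python
# Hedged Python sketch of first-deployment placement: for each service,
# find nodes meeting its software, hardware and IoT (Thing) requirements,
# place it on one, and consume that node's hardware capacity.
def eligible_nodes(service, nodes):
    """Nodes satisfying the service's sw, hw and IoT requirements."""
    return [
        n for n in nodes
        if set(service["sw"]) <= set(n["sw"])       # software requirements
        and service["hw"] <= n["hw"]                 # hardware capacity
        and set(service["things"]) <= set(n["things"])  # IoT requirements
    ]

def place_all(services, nodes):
    """Greedy sketch: place each service on the first eligible node;
    return the placement, or None if some service cannot be placed."""
    placement = {}
    for s in services:
        candidates = eligible_nodes(s, nodes)
        if not candidates:
            return None                # placement fails
        node = candidates[0]
        node["hw"] -= s["hw"]          # consume hardware capacity
        placement[s["id"]] = node["id"]
    return placement
```

A management decision would rerun `eligible_nodes` only for the services affected by an infrastructure change, which is what makes continuous reasoning cheap in the common cases.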

37
Q

What is the incremental development and delivery advocated by Agile?

A

Incremental development is a process that starts from a product feature list; from this list a feature is chosen to be included in the next increment. When a feature is chosen, all team members must have the same understanding of how the feature is to be implemented. The feature is then implemented and tested, and the whole system is tested to check that the feature works within it. The system is then delivered, and once everything works correctly and enough features have been implemented, the system is released.
In agile software development, incremental delivery refers to the practice of delivering small, incremental pieces of functionality to the customer at regular intervals. This allows the customer to see the progress of the project and provide feedback at each stage, rather than waiting until the end to see the final product. Incremental delivery also allows the project team to get early feedback on their work and make adjustments as needed. This approach helps to reduce risk and improve the chances of success for the project.

38
Q

Which refactoring can be applied to resolve architectural smell X?

A

done

39
Q

Which are the most frequent API security vulnerabilities and how can we prevent them?

A

Done