Day 1 Registration
Keynote: Track 1
Parallel sessions: Track 1 | Track 2 | Track 3
Day 2 Registration
Description coming soon...
As enterprises (and smaller companies) continue to shift focus and (hopefully) adopt the “right software technology for the right project” approach, being an engineer versed in multiple languages helps! We will look into two trending domains where this is particularly evident: the Internet of Things and Distributed Ledgers (a.k.a. Blockchain-like solutions). After some background on the state of the art in these areas, I will try to passionately convince you why even limited knowledge of several programming languages will be key to your continued engineering success, and also provide some directions for further exploration. * In spite of its title, this session does not contain randomly jumbled-together words.
My quest in this talk is to introduce you to gamification: the founding principles that allow us to achieve success by adopting the process, and an end-to-end example of gamifying a familiar process at work. Some of the top global companies are already employing it as a technique for their business operations, so why should YOU wait? Gamification goes beyond collecting stickers for discounted items at the local gas station or stamps for a free latte. Since gamification is the convergence of game design and user-centered design, we will take a journey that touches on these two rather familiar concepts to explain what is yet to be understood and applied to break the chains of standard business processes.
Quality products are the result of quality processes.
1. Introduction to CMMI
   - Historical background
   - What CMMI is about
   - CMMI insight: process areas, goals and practices, maturity levels
   - Where, when and how to use CMMI
2. CMMI and Agile
   - High-level comparison
   - CMMI for process improvement
   - Some “gaps” in Agile from a CMMI perspective
   - CMMI benefits
   - Which way to go?
3. Suggested “gaps” to be addressed
   - Documentation
   - Traceability
4. Summary
Some people consider microservices a specialization or extension of Service-Oriented Architecture (SOA); others are of the opinion that the microservices architecture is a completely new way of developing software. No matter how we approach it, the benefits are clear: modular, easy-to-change services responsible for specific functionality, with well-specified boundaries and contracts. They are easier and quicker to develop, much easier to distribute among teams, and easier to scale, upgrade and patch with optimized DevOps cycles. Not everybody, however, talks about the pitfalls of distributed systems becoming the norm with the rise of microservices-based architectures. The list is quite long, including messaging failures, latencies and timeouts, achieving consensus (e.g. leader election), persistent and replicated state, consistency and distributed transactions, and monitoring, tracing and troubleshooting as part of the DevOps cycle. Vibrant communities across the industry are trying to overcome those pitfalls with unprecedented speed and variety of solutions, coming up with languages like Go and with newer, lightweight runtime systems and packaging formats like containers. That, in turn, raises another hard problem to solve: resource scheduling and the optimal placement of massively scalable workloads. This talk will follow the progress in that space and focus on what a modern container cluster management system should provide, as well as present an overview and current trends in the typical architecture of container scheduling systems.
Why do 80% of startups fail? One of the main reasons is the lack of usability reviews. Is the software you are testing too complicated or perhaps outdated, or does its design not follow best practices? What are the main differences between the biggest players on the market from a design and usability perspective? Learn techniques and best practices for usability reviews of software, become a usability expert, and don't let your users get tripped up while using your software. Key takeaway of the presentation: understand how to conduct usability reviews - a structured means of examining the usability of an interactive system by evaluating it against a set of recognized usability best-practice principles. Reviews are usually carried out by usability experts, but with a bit of know-how and a good set of guidelines anyone can have a go. Join us and you will get a glimpse of usability review processes, checklists, and reports. This will help you negotiate better-looking, more sellable software.
Agile is feature-oriented, and delivering product functionality is the main goal for every agile team. One tricky question is how to deal with all the non-functional requirements, which are hardly mentioned in the usual Agile practices. In this session we will learn how to manage quality attributes, how to measure them, and how to make the team as focused on them as it is on delivering new features. As we will not be going through Agile/Scrum fundamentals, this talk is suitable for every PM, Product Owner, QA or software engineer who already has at least a basic background and knowledge in the Agile area.
The time it takes a person to click or tap a target on the screen depends on the size of, and the distance to, the target. This is Fitts' law. There are many other factors that define how humans perform with a piece of software or information. Have you heard of Hick, Nielsen, Zipf, Occam, Krug, Norman? We will explore various laws and principles that apply to software performance. We will talk about making choices, the effect of practice, the number of clicks, users' temporary blindness or tunnel vision, and more. We will look into examples - both good and bad. As usual, there will be practical pieces of advice to take home and use when designing and testing.
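Fitts' law has a standard quantitative form (the Shannon formulation): movement time grows with the logarithm of distance over target width. A minimal sketch, with illustrative constants that are not from the talk (real values of a and b are measured per device and user):

```python
import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    """Predicted time (seconds) to acquire a target, per Fitts' law
    (Shannon formulation). a (intercept) and b (slope) are
    device/user-specific constants; the defaults here are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A small, far-away target takes longer than a large, nearby one:
far_small = fitts_movement_time(distance=800, width=20)
near_big = fitts_movement_time(distance=100, width=100)
assert far_small > near_big
```

This is why large, close-by click targets feel faster: doubling the target width cancels the effect of doubling the distance.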
With technology becoming ever more complex, developers as well as Quality Assurance engineers have stepped up their game in order to assure the safety of online users around the world. Concepts such as phishing filters, ad-blockers, malware scanners, and other intricate software tools and development practices have been developed over the years to help improve user safety online. As tangible parts of any software, we can assign these a numeric value and decide whether they are effective methods, allowing developers and QAs alike to pick and choose what they use as part of the software. But not all aspects are tangible. The most uncontrollable aspect of any software is the user, and users, unlike the people who made the software, do not have a deep understanding of what potential security vulnerabilities a piece of software may have. This talk looks at the more common mistakes users make that jeopardize their safety, and objectively discusses how to look at software from the perspective of a user with no experience in online safety. We will take a look at the social-engineering aspects of "hacking", as well as the technological aspects that can confuse users and make them believe malware or other harmful software is legitimate. The end goal is a set of guidelines that QAs and developers can use at every point in the development process to minimize these loopholes and add safeguards against malicious intent.
Digital transformation (DX) is hot -- and if you're not doing it, your company will fall behind. This session outlines what it is, how it is applied to the industry, what practices you can apply to your company and what tools exist to help you through your journey.
The presentation is about the concepts and ideas applied to achieve functional test automation that we believe is scalable, for the vSphere Client (vSphere is the heart of the SDDC delivered by VMware). As a result we are able to execute 1,000 tests in 15 minutes, and we have reduced the BAT (build acceptance test) set execution from about 3 hours to 5 minutes. The talk goes through the challenges in test automation development and execution brought by the complexity of the product, the scale of the team and the business requirements. It then outlines the main goals for the automation - fast test execution, continuous integration, and low maintenance cost of test automation and configuration. Based on these, we present the set of concepts and solutions that led us to massive test execution parallelism, testbed sharing and automatic provisioning - without the cost of increased test automation complexity. The solution is based on two sets of tools: a “step-based workflow” and a “test scheduling system”. The “step-based workflow” provides test steps that are responsible for reverting changes in the testbed, the concept of requesting resources used by the test, and the allocation mode of the requested resources (shared or allocated), in order to execute the tests in parallel on a shared environment. The “test scheduling system” is the tool that schedules and executes the tests. It analyzes the test run list and uses the data provided by each “step-based workflow” test to schedule and execute all the tests in an optimal way. Although the use case is VMware-specific, I believe that the presented ideas and concepts can be applied in the automation of other software products. More details are available in the attached slides.
CrossLend is Europe’s debt capital securitisation platform, offering single-loan securitisation to a range of partners across the lending industry. The presentation will cover the transition of fintech from Industry 3.0 to Industry 4.0 and will share a real world example of innovation in financial services. You will also get a glimpse of a modern financial services company and its technology approach.
A novel approach to software development, versioning and the release process that enables software developers to increase code quality while simultaneously reducing deployment and release times, with a focus on building next-generation resilient architectures. We will discuss:
- Why TfsGit(1) is better than TFVC(2), and how to monetize the benefits;
- An increase in modularity comes at the expense of increased maintenance complexity in handling repeatable builds and overall system coherence (no breaking changes between the different components in the same release). What is the approach to keeping this cost manageable?
- What about the human-error factor in the whole process? Is it necessary to invent, follow and validate complicated processes to tackle the whole release?
- How are changes propagated to dependent components safely?
- How to keep, manage, validate and apply different configurations to the same binaries, deployed on different environments?
- Case study: a short demonstration of the discussed workflow:
  o Initiating a project;
  o Adding a dependency;
  o Rolling out a release;
  o Making changes and releasing them.
-----------------------------------
1) Team Foundation Server Git – a distributed version control system embedded in Team Foundation Server
2) Team Foundation Version Control – a centralized version control system.
Too often, when discussing test automation, we focus on how time-consuming it is to set up. Indeed, significant effort is required to implement a framework, add test cases and maintain all of that as requirements evolve. However, a very important component remains overlooked: the daily routine of monitoring test results, including detecting defects both in the automation framework and in the product, logging them in a tracking system, and removing the false positives caused by known issues. On a practical level, this presentation will explore ways to make the framework familiar with the defects and then “teach” it to get the boring work done. It will also overwhelm the audience with obscure pop-culture references, often a direct result of the fact that the author now enjoys an extraordinary amount of free time at work.
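One way to "teach" a framework about known issues, as described above, is to match failure logs against the signatures of already-tracked defects. A minimal sketch with hypothetical ticket IDs and patterns (the talk's actual mechanism may differ):

```python
import re

# Hypothetical known-issue registry: maps a tracking-system ticket to a
# pattern that identifies its failure signature in test logs.
KNOWN_ISSUES = {
    "BUG-1234": re.compile(r"TimeoutError: login service"),
    "BUG-5678": re.compile(r"ElementNotFound: #checkout-button"),
}

def triage(test_name, failure_log):
    """Label a failed test as a known issue (false positive for the
    product team) or as a potential new defect that needs a human."""
    for ticket, pattern in KNOWN_ISSUES.items():
        if pattern.search(failure_log):
            return (test_name, "known-issue", ticket)
    return (test_name, "new-defect", None)

print(triage("test_login", "TimeoutError: login service did not respond"))
# → ('test_login', 'known-issue', 'BUG-1234')
```

Failures that match a registry entry can be auto-annotated in the report, leaving only genuinely new failures for daily human review.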
Have you ever needed something in your web or mobile application that can really make you stand out from the crowd? In this presentation you will learn how you can benefit from adding interaction and motion elements to your digital products, and why this is important during both the initial and the development phases of the product lifecycle. Let me demonstrate how easy it is to integrate a supportable animation with JS, CSS and external libraries. ... and are you curious what the link is between the good old Mickey Mouse and the tendencies in UX?
Humans evolve slowly. Anthropologists estimate that the modern human brain has developed over the course of the last 100,000 years (give or take a century). Technology (think carved bones and pointed sticks) has been evolving right along with us. Technological complexity in digital form, however, is new but its recent exponential growth has enabled it to quickly surpass our cognitive capabilities. Consider the invention of the transistor in 1947 and the vast array of complex technology that followed. Our tech is now so intricate, its ability to process information so great, that our perceptual and cognitive abilities pale in comparison. In fact, for the past 69 years we have been struggling to make sense of, keep pace with, and not get left behind by technological systems in which we are the limiting factor. Given the rate of human evolution we never stood a chance. Historically, we have attempted to bridge the gap between technological complexity and human capabilities by forcing people to adapt to the tech. Barring evolutionary leaps or limitless training budgets, this has been problematic. Alternatively, we can adapt the tech to us, such that the design exploits our native abilities. This is the raison d’être of User Experience. But how is UX “minding the gap”? I will argue that traditional UX methods result in marginal improvements and do not bridge the gap in any substantial way. To keep pace with technological evolution, we must adopt a systematic process of innovation. My presentation will describe this process and demonstrate how it is our only hope to create a future in which humans partner with technology rather than watch longingly as it disappears over the horizon…without us.
In this presentation I will argue that to have a successful project, one must first have a bonded and effective team. Process and delivery issues are often rooted in human relationships. I will emphasize the often overlooked or even missing phase in existing team development models. I will share my personal experience from the past 10 years in the software development industry and will focus on common leadership challenges, such as building trust, taking responsibility and risks, and going into the gray areas. I will share principles for successfully addressing those challenges and for taking your team to the next level. The presentation will include rich examples from professional life in the IT industry, mixed with examples from nature and the entertainment industry.
As technology has evolved, the cloud has become the mainstream approach for software development and deployment. The immediate response to this rapidly changing development environment is a change in conventional testing: testing of innovative technologies now happens using cloud infrastructure. So, what is the difference between testing in the cloud and testing "on the ground"? In this presentation you will also learn how to use cloud offerings to achieve better and more reliable results, with examples of build and deployment pipelines provided by IBM Bluemix DevOps. Diana Dimova is a Senior QA in the Musala Soft team that is part of the Customer Innovation division at IBM. In her career she has dealt with cutting-edge technologies that were new for their time but later gained worldwide acceptance. During her presentation, Diana will also give you some insight into how to win a medal at the Olympic Games.
Do you want to understand how to raise your home's IQ? Do you want to see some cool geek stuff that you could install in your home yourself? Do you want to program all this cool geek stuff yourself? Come see and learn how to do it :)!
The session will briefly introduce the established Linux-based Docker technology toolset and draw a parallel with the current state of the corresponding implementation on Windows. We will present a side-by-side comparison in the context of typical Docker deployment scenarios carried out across multiple platforms. An analysis of the strengths and drawbacks of the Docker implementation for Windows will be given, both for its current and potential future state.
Testing is very important, but we are underestimating our QAs if we only let them do this part. QAs can be a really strong part of the product team too. Engaging them early in the definition of a product can make a PM's life much easier, as QA people are great at asking tough questions and giving direct feedback on what will work or not. And this is just a small piece of the benefits you can get. In this session, I will share positive experiences from working closely with QA people in the early stages of projects, and will discuss how QAs can contribute significantly to the overall success of the project by providing valuable input from Day 1 and creating some of the so-called agile specifications.
Sharing my experience as the CTO of an IoT startup company. The team, the business model, the architecture, the technology (hardware, software), security aspects.
In a scaling-oriented IT landscape, developers are bound to create applications that are deployed in the cloud. While this allows for much flexibility and the ability to match high payloads, application deployments have grown in scale and have therefore increased their attack surface. This paper aims to review basic threats to cloud application security and to introduce some 'best practices' related to security in this context. These practices range from basic technical tips for OS/application/network security to guidelines for organizing the development/deployment process.
How do you transition into the modern QA world? From testing a huge monolithic platform to dozens of distributed microservices. From BDD end-to-end testing to mocks and built-in tests. We'll go back and forth between the two sides, comparing the strategies, processes and techniques for making sure what the customer receives is a top-notch quality product. We'll answer why we need different mindsets and how to switch between them - even if it requires a bit of madness. We'll talk from firsthand experience about handling everything at large scale - many teams, many locations, one code base, taming the quality chaos.
In this session I will introduce you to a novel way of designing information systems - Event Sourcing. This pattern has been applied for ages in other industries, but only recently was it rediscovered in software design. The idea is actually pretty simple - all changes to application state are modeled as a sequence of events - yet its benefits are numerous: * It is a perfect fit for building scalable, highly concurrent, distributed systems * It provides a truly transactional audit log of everything that happened in the system * It allows rewinding application state to any point in time for analysis, and even retroactive debugging * And most importantly - although it is a technical pattern, it provides business value that can be transformed into a competitive advantage. The focus of my talk will be the Event Sourcing pattern, but I'll also briefly describe CQRS - an architecture that goes hand in hand with Event Sourcing. This knowledge will allow you to employ this powerful pattern in your next project.
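The core idea - state is derived by replaying an append-only event log rather than mutated in place - can be shown in a few lines. A minimal sketch with an invented bank-account example (not from the talk):

```python
# Minimal event-sourcing sketch: the event list is the source of truth
# and doubles as a transactional audit log; current state is a fold
# (replay) over it.
events = []

def append(event):
    """Record what happened; state is never updated directly."""
    events.append(event)

def balance(upto=None):
    """Rebuild account state by replaying events, optionally only up to
    an earlier point - this is what enables rewinding state for
    analysis and retroactive debugging."""
    state = 0
    for i, (kind, amount) in enumerate(events):
        if upto is not None and i >= upto:
            break
        state += amount if kind == "deposited" else -amount
    return state

append(("deposited", 100))
append(("withdrawn", 30))
append(("deposited", 50))
assert balance() == 120        # current state
assert balance(upto=2) == 70   # state rewound to after the 2nd event
```

In the CQRS pairing mentioned above, the write side would only ever append events like these, while the read side maintains query-friendly projections built from the same log.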
Cloud solutions are increasingly offered to businesses and are becoming more and more popular. However, my almost decade-long experience shows they are good for a lot of surrounding services, but we (the enterprises) remain bound to the on-premises environment for our core business. Why is that? How are we doing it? Are we going to move to the cloud soon? I will share my point of view, and I would really appreciate it if you come and share yours.
Machine learning has been leveraged to radically change many industry verticals. The problem is that the learning curve has always been very steep: exotic languages, complex tools, little or no documentation. But innovative cloud-based ML platforms are changing that and democratizing access. During this session you will learn the basics of machine learning, and you will see a demo of how you can build a prediction model using real-world data, evaluate several different algorithms and modeling strategies, then deploy the finished model as an Azure web service within minutes.
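The workflow described - train on real data, compare several candidate algorithms, pick the best - can be sketched in a few lines. Here scikit-learn stands in for the cloud platform purely as an illustration (the session itself uses Azure ML, and the dataset and models below are my choices, not the presenter's):

```python
# Compare candidate algorithms on a real dataset via cross-validation;
# in a cloud ML platform the same comparison happens through the UI.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # real-world tabular data

candidates = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy {score:.3f}")
```

The winning model would then be retrained on all the data and published behind a web endpoint, which is the step the Azure demo automates.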
Software is the most important line of defense for protecting critical information assets. The continuous increase in the sophistication and volume of cyber-security attacks provides compelling reasons for enhancing the security of software applications that control critical assets. It is broadly accepted that in order to produce dependable and secure applications, developers need to “build security in” throughout the software development lifecycle (SDL). Threat modeling is essential for building security in at all SDL stages, and in particular at the design stage. In the last few years, several innovative approaches to threat modeling have emerged, and recently some supporting tools have become available. Using the Microsoft SDL tool as an example, we will illustrate the process, supported by OWASP best practices.
It took us over 100 years to add wheels to our suitcases. How much time and effort is needed to learn how to test new technologies? In 2015 one of the most popular themes at many testing events was the technologies of the future. We discussed a lot about what we can expect in the next 30 years, but we didn't talk about how to test it. For my presentation I chose 5 examples with which I would like to show what issues we may face, and a few ideas for how we should prepare for them. Those examples are: let's start with something small but powerful - a chip; let's go bigger - robots; in 2015 Tesla proved that its car can drive itself; next, growing every second - Big Data; and last but not least - the Internet of Things. I would like to work with attendees to find possible areas of improvement by asking questions and showing examples of what, in my opinion, will be our main challenges in testing: building and supporting test environments that can simulate towns or humans, and testing against legal regulations.
In a dynamic environment where web services are frequently being upgraded, quick feedback about the reliability of web service methods is essential. This is especially the case in enterprise environments, where plenty of services depend on each other. That information needs to be provided in a limited period of time, with high accuracy, so manual testing is not applicable. In order for the process to be fast and accurate, developers' and testers' skills must be combined. We are going to present our custom solution, which: - has proved useful - is highly maintainable - provides an opportunity for team members with less technical skills to create and maintain tests
Estimation is in many cases a key part of software development processes, yet developers hate it and estimates are often incorrect. Project deadlines suffer, and so do managers. The talk explores a few psychological biases and their effect on estimation, thus revealing the underlying causes of inaccuracy. It covers the planning fallacy, optimism bias, the valence effect, and more - and provides practical tips for more accurate estimates that make both developers and managers happy.
Do you feel like you have great automated tests? Well, if you do, in this session you will get ideas on how to get more out of your automated tests with very little effort! We will discuss how existing automated tests can be instrumented to do more than tell you whether the product they are testing is functionally working. With little effort, generic functional tests can provide information such as the performance degradation of the product over the development cycle, product log quality, and automated test coverage. If any of your automated tests cover customer scenarios, then through slight modifications these tests can be turned into official product documentation that is guaranteed to always be up to date, as the tests are regularly executed. With proper instrumentation of tests it is possible to easily build smart systems that determine which tests are valuable to execute for every single code check-in. It is also easy to automatically identify duplicate tests and eliminate unneeded ones to reduce maintenance cost. The techniques we will discuss are applicable to almost any set of existing tests; however, the automated tests that can bring the greatest benefit are end-to-end tests, and that is why we will concentrate on them.