Planet Linux Australia

Planet Linux Australia - http://planet.linux.org.au

James Morris: Linux Security Summit 2016 Wrapup

Sat, 2016-10-01 00:01

Here’s a summary of the 2016 Linux Security Summit, which was held last month in Toronto.

Presentation slides are available at http://events.linuxfoundation.org/events/archive/2016/linux-security-summit/program/slides.

This year, videos were made of the sessions, and they may be viewed at https://www.linux.com/news/linux-security-summit-videos — many thanks to Intel for sponsoring the recordings!

LWN has published some excellent coverage.

This is a pretty good representation of the main themes which emerged in the conference: container security, kernel self-protection, and integrity / secure boot.

Many of the core or low level security technologies (such as access control, integrity measurement, crypto, and key management) are now fairly mature. There’s more focus now on how to integrate these components into higher-level systems and architectures.

One talk I found particularly interesting was Design and Implementation of a Security Architecture for Critical Infrastructure Industrial Control Systems in the Era of Nation State Cyber Warfare. (The title, it turns out, was a hack to bypass limited space for the abstract in the cfp system).  David Safford presented an architecture being developed by GE to protect a significant portion of the world’s electrical grid from attack.  This is being done with Linux, and is a great example of how the kernel’s security mechanisms are being utilized for such purposes.  See the slides or the video.  David outlined gaps in the kernel in relation to their requirements, and a TPM BoF was held later in the day to work on these.  The BoF was reportedly very successful, as several key developers in the area of TPM and Integrity were present.

#linuxsecuritysummit TPM BOF session pic.twitter.com/l1ko9Meiud

— LinuxSecuritySummit (@LinuxSecSummit) August 25, 2016

Attendance at LSS was the highest yet with well over a hundred security developers, researchers and end users.

Special thanks to all of the LF folk who manage the logistics for the event.  There’s no way we could stage something on this scale without their help.

Stay tuned for the announcement of next year’s event!

 

Simon Lyall: DevOpsDays Wellington 2016 – Day 2, Session 1

Fri, 2016-09-30 22:02
Jethro Carr – Powering stuff.co.nz with DevOps goodness

Stuff.co.nz
– “News” Website
– 5 person DevOps team

Devops
– “Something you do because Gartner said it’s cool”
– Sysadmin -> InfraCoder/SRE -> Dev Shepherd -> Dev
– Stuff in the middle somewhere
– DevSecOps

Company Structure drives DevOps structure
– Lots of products – one team != one product
– Dev teams with very specific focus
– Scale – too big, yet too small

About our team
– Mainly Ops focus
– small number compared to developers
– Operate like an agency model for developers
– “If you buy the Dom Post it would help us grow our team”
– Lots of different vendors with different skill levels and technology

Work process
– Use Kanban with Jira
– Works for Ops focussed team
– Not so great for long running projects

War Against OnCall
– Biggest cause of burnout
– focus on minimising callouts
– Zero alarm target
– Love pagerduty

Commonalities across platforms
– Everyone using compute
– Mostly Java and JavaScript
– Using Public Cloud
– Using off the shelf version control, deployment solutions
– Don’t get overly creative and make things too complex
– Proven technology that is well tried and tested, with skills available in the marketplace
– Classic technologies like Nginx, Java and Varnish still have their place; you don't always need the latest fashion

Stack
– AWS
– Linux, ubuntu
– Adobe AEM Java CMS
– AWS 14x c4.2xlarge
– Varnish in front, used by everybody else. Makes ELB and ALB look like toys

How they use Varnish
– Retries against backends on 500 responses, serves old copies (see the sketch after this list)
– split routes to various backends
– Control CDN via header
– Dynamic Configuration via puppet
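
Varnish implements this retry-and-serve-stale behaviour in VCL; purely as an illustration of the pattern (not the site's actual configuration, and with a hypothetical backend URL), a minimal Python sketch:

```python
# Illustration only: the retry-and-serve-stale pattern that Varnish provides in VCL.
# The backend URL is a hypothetical placeholder, not real infrastructure.
import urllib.request
import urllib.error

stale_cache = {}  # path -> last known good body

def fetch(path, backend="http://backend.internal:8080", retries=2):
    """Try the backend a few times; on persistent 5xx errors, serve the stale copy."""
    url = backend + path
    for _ in range(retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read()
                stale_cache[path] = body   # refresh the stale copy on success
                return body
        except urllib.error.HTTPError as err:
            if err.code < 500:
                raise                      # 4xx errors are passed through, not retried
        except urllib.error.URLError:
            pass                           # backend unreachable; fall through and retry
    if path in stale_cache:
        return stale_cache[path]           # serve the old copy rather than an error page
    raise RuntimeError("backend down and no stale copy for %s" % path)
```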

CDN
– Akamai
– Keeps the site online during breaking-news load
– 90% cache offload
– Management is a bit slow and manual

Lambda
– Small batch jobs
– Check mail reputation score
– “Download file from a vendor” type stuff
– Purge cache when a static file changes (sketch below)
– Lambda webapps – hopefully soon, a bit immature
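
As a rough illustration of the "purge cache when a static file changes" job, here is a sketch of a Lambda handler triggered by an S3 object notification; the purge endpoint URL is a hypothetical placeholder, not Stuff's actual setup:

```python
# Hypothetical sketch: an S3 "object created" event triggers a Lambda that POSTs
# the changed path to an internal cache-purge endpoint. The endpoint URL is made up.
import json
import urllib.request

PURGE_ENDPOINT = "https://cache-purge.internal.example/purge"  # hypothetical

def handler(event, context):
    """AWS Lambda entry point for an S3 object-created notification."""
    purged = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        req = urllib.request.Request(
            PURGE_ENDPOINT,
            data=json.dumps({"path": "/" + key}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req, timeout=10)
        purged.append(key)
    return {"purged": purged}
```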

Increasing number of microservices

Standards are vital for microservices
– Simple and reasonable
– Shareable between vendors and internal teams
– flexible
– grow organically
– Needs to be detailed
– 12 factor App (config-from-environment sketch after this list)
– 3 languages: Node, Java, Ruby
– Common deps (SQL, Varnish, memcache, Redis)
– Build pipeline standardised, using Codeship
– Standardise server builds
– Everything automated with Puppet
– Puppet building Docker containers (with Puppet + Pupistry)
– Std Application deployment
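
One concrete piece of the 12-factor standard is keeping configuration in the environment rather than in the build; a minimal sketch, with made-up variable names:

```python
# Minimal 12-factor style configuration: everything comes from environment variables.
# The variable names and defaults are illustrative only.
import os

DATABASE_URL = os.environ["DATABASE_URL"]                        # required; fail fast if missing
MEMCACHE_HOST = os.environ.get("MEMCACHE_HOST", "localhost")     # optional, with a default
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")
```

The same container image can then be promoted unchanged from test to production, with only the environment differing.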

Init systems
– Had proliferation
– pm2, god, supervisord, SysVinit are out
– systemd and upstart are in

Always exceptions
– “Enterprise ___” is always bad
– Educating the business is a forever job
– Be reasonable, set boundaries

More Stuff at
http://tinyurl.com/notclickbaithonest

Q: Pull request workflow
A: Largely replaced traditional review

Q: DR eg AWS outage
A: Documented process to push manually if Codeship dies; the rest runs in 2 AZs; snapshots

Q: Dev teams structure
A: Project specific rather than product specific.

Q: Puppet code tested?
A: Not really; kind of tested via the pre-prod environment. Would prefer result (Serverspec-style) testing rather than low-level testing of each line
A: Code team have good test coverage though. 80-90% in many cases.

Q: Load testing, APM
A: Use New Relic. Not much luck with external load testing companies

Q: What if somebody wants something non-standard?
A: Case-by-case. Allowed if needed but needs a good reason.

Q: What happens when automation breaks?
A: Documentation is actually pretty good.


Simon Lyall: DevOpsDays Wellington 2016 – Day 1, Session 3

Fri, 2016-09-30 08:03
Owen Evans – DevOps is Dead, long live DevOps

Theory: DevOps is a role that never existed.

In the old days
– Shipping used to be hard and expensive, eg on physical media
– High cost of release
– but everybody else was the same.
– Lots of QA and red tape, no second chances

Then we got the Internet
– Speed became everything
– You just shipped enough

But Hardware still was a limiting factor
– Virtual machines
– IaaS
– Containers

This led to complacency
– Still had a physical server under it all

Birth of devops
– Software got faster but still had to have hardware under there somewhere
– Disparity between operations cadence and devs cadence
– things got better
– But we didn’t free ourselves from hardware
– Now everything is much more complex

Developers are now divorced from the platform
– Everything is abstracted
– It is leaky buckets all the way down

Solutions
– Education of developers as to what happens below the hood
– Stop reinventing the wheel
– Harmony is much more productive
– Lots of tools means that you don’t have enough expertise on each
– Reduce fiefdoms
– Push responsibility but not ownership (you own it but the devs make some of the changes)
– Live with the code
– Pit of success, easy ways to fail that don’t break stuff (eg test environments, by default it will do the right thing)
– Be Happy. Everybody needs to be a bit devops and know a bit of everything.


Simon Lyall: DevOpsDays Wellington 2016 – Day 1, Session 2

Fri, 2016-09-30 08:03
Martina Iglesias – Automatic Discovery of Service metadata for systems at scale

Backend developer at Spotify

Spotify Scale
– 100m active users
– 800+ tech employees
– 120 teams
– Microservices architecture

Walk through a sample artist's page
– each component (playlist, play count, discography) is a separate service
– Aggregated to send the result back to the client (see the sketch below)
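
A rough sketch of that aggregation pattern, fetching each component concurrently and merging the results; the service endpoints are hypothetical placeholders, not Spotify's APIs:

```python
# Illustrative aggregation of independent backend services into one page view.
# The service URLs are hypothetical placeholders.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVICES = {
    "playlist":    "http://playlist.internal/artist/{id}",
    "play_count":  "http://playcount.internal/artist/{id}",
    "discography": "http://discography.internal/artist/{id}",
}

def fetch_json(url):
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def artist_page(artist_id):
    """Call each component service concurrently and merge the results for the client."""
    urls = {name: url.format(id=artist_id) for name, url in SERVICES.items()}
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        futures = {name: pool.submit(fetch_json, u) for name, u in urls.items()}
        return {name: fut.result() for name, fut in futures.items()}
```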

Hard to co-ordinate between services as scale grows
– 1000+ services
– Each need to use each others APIs
– Dev teams all around the world

Previous Solution
– Teams had docs in different places
– Some in Wiki, Readme, markdown, all different

Current Solution – System Z
– Centralise in one place, as automated as possible
– Internal application
– Web app, catalog of all systems and their parts
– Well integrated with Apollo service

Web Page for each service
– Various tabs
– Configuration (showing versions of build and uptimes)
– API – list of all endpoints for the service, schema, error codes, etc (automatically populated)
– System tab – Overview on how service is connected to other services, dependencies (generated automatically)

Registration
– System Z gets information from Apollo and prod servers about each service that has been registered

Apollo
– Java libs for writing microservices
– Open source

Apollo-meta
– Metadata module
– Exposes endpoint with metadata for each service
– Exposes
– instance info – versions, uptime
– configuration – currently loaded config of the service
– endpoints –
– call information – monitors the service and learns which incoming and outgoing calls it actually makes, and to/from which other services
– Automatically builds dependencies (see the illustrative sketch below)
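
Apollo itself is a set of Java libraries; purely to illustrate the metadata-endpoint idea (the path and field names below are assumptions, not Apollo's real API), a small self-describing service in Python:

```python
# Illustration of the self-describing metadata endpoint idea: a service exposes a
# small JSON document about itself so a catalogue like System Z can collect it.
# The /_meta path and field names are assumptions, not Apollo's actual API.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

STARTED = time.time()
CONFIG = {"feature_flags": {"new_player": True}, "db_pool_size": 10}  # example config

class MetaHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/_meta":
            self.send_error(404)
            return
        meta = {
            "instance": {"version": "1.4.2", "uptime_seconds": int(time.time() - STARTED)},
            "configuration": CONFIG,
            "endpoints": ["/artist/<id>", "/health"],
            "outgoing_calls": ["playcount", "discography"],  # learned by monitoring real traffic
        }
        body = json.dumps(meta).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MetaHandler).serve_forever()
```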

Situation Now
– Quicker access to relevant information
– Automated boring stuff
– All in one place

Learnings
– Think about growth and scaling at the start of the project

Documentation generators
– Apollo
– Swagger.io
– raml.org

Blog: labs.spotify.com
Jobs: spotify.com/jobs

Q: How to handle breaking APIs
A: We create new version of API endpoint and encourage people to move over.

Bridget Cowie – The story of a performance outage, and how we could have prevented it

– Works for Datacom
– Consultant in Application performance management team

Story from Start of 2015

– Friday night phone calls from your boss are never good.
– Dropped in application monitoring tools (Dynatrace) on Friday night, watch over weekend
– Prev team pretty sure problem is a memory leak but had not been able to find it (for two weeks)
– If somebody tells you they know what is wrong but can't find it, give details, or fix it, then be suspicious

Book: Java Enterprise performance

– Monday prod load goes up and app starts crashing
– Told the ops team, but since the crash wasn't visible yet they weren't believed; waited

Tech Stack
– Java App, Jboss on Linux
– Multiple JVMs
– Oracle DBs, Mulesoft ESB, ActiveMQ, HornetQ

Ah Ha moment
– Had a look at import process
– 2.3 million DB queries per half hour (roughly 1,280 per second)
– With a max of 260 users, that seems way more than what is needed
– Happens even when nobody is logged in

Tip: Typically 80% of all issues can be detected in dev or test if you look for them.

Where did this code come from?
– Process to import a csv into the database
– 1 call mule -> 12 calls to AMQ -> 12 calls to App -> 102 db queries
– Passes all the tests… But
– Still shows huge growth in queries as we go through layers
– DB queries grow bigger with each run (see the toy sketch below)
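
The usual shape of this kind of blow-up is a per-row lookup (the classic N+1 query pattern) hidden behind each layer; a self-contained toy, with made-up numbers, just to show how innocent-looking code multiplies query counts:

```python
# Toy illustration of query counts multiplying through layers (N+1 pattern).
# The numbers and table names are made up, not the actual application's.
query_count = 0

def db_query(sql, *params):
    global query_count
    query_count += 1          # stand-in for a real database call

def handle_batch(batch):
    for row in batch:
        db_query("SELECT * FROM customers WHERE id = %s", row["customer_id"])  # lookup per row
        db_query("INSERT INTO imports (customer_id, data) VALUES (%s, %s)",
                 row["customer_id"], row["data"])

def import_csv(rows, fanout=12):
    # one ESB call fans out into `fanout` queue messages, each handled separately
    batches = [rows[i::fanout] for i in range(fanout)]
    for batch in batches:
        handle_batch(batch)

rows = [{"customer_id": i, "data": "x"} for i in range(50)]
import_csv(rows)
print(query_count)  # 100 queries for a 50-row file; the tests still pass
```

Each layer looks reasonable in isolation; it is the product across layers, and across repeated runs, that shows up in production.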

Tip: Know how your code behaves and track how this behaviour changes with each code change (or even with no code change)

Q: Why Dynatrace?
A: Quick to deploy, useful info back in only a couple of hours


Simon Lyall: DevOpsDays Wellington 2016 – Day 1, Session 1

Thu, 2016-09-29 22:04
Ken Mugrage – What we’re learning from burnout and how DevOps culture can help

Originally in the Marines, environment where burnout not tolerated
Works for Thoughtworks – not a mental health professional

Devops could make this worse
Some clichéd places say: “Teach the devs puppet and fire all the Ops people”

Why should we address burnout?
– Google found psychological safety was the number 1 indicator of an effective team
– Not just a negative, people do better job when feeling good.

What is burnout
– The Truth about burnout – Maslach and Leiter
– The Dimensions of Burnout
– Exhaustion
– Cynicism
– Mismatch between work and the person
– Work overload
– Lack of control
– Insufficient reward
– Breakdown of communication

Work overload
– Various prioritisation methods
– More load sharing
– Less deploy marathons
– Some orgs see devops as a cost saving
– There is no such thing as a full stack engineer
– team has skills, not a person

Lack of Control
– Team is ultimately responsible for the decisions
– Use the right technology and tools for the team
– This doesn't mean a "Devops team" controlling what others do

Insufficient Reward
– Actually not a great motivator

Breakdown in communication
– Walls between teams are bad
– Everybody involved with product should be on the same team
– 2 pizza team
– Pairs with different skill sets are common
– Swarming can be done when required ( one on keyboard, everybody else watching and talking and helping on big screen)
– Blameless retrospectives are held
– No “Devops team”, creating a silo is not a solution for silos

Absence of Fairness
– You build it, you run it
– Everybody is responsible for quality
– Everybody is measured in the same way
– example Expedia – *everything* deployed has A/B testing
– everybody goes to release party

Conflicting Values
– In the broadest possible sense
– eg Company industry and values should match your own

Reminder: it is about you and how you fit in with the above

Pay attention to how you feel
– Increase your self awareness
– Maslach Burnout inventory
– Try not to focus on the negative.

Pay attention to work/life balance
– Ask for it, company might not know your needs
– If you can’t get it then quit

Talk to somebody
– Professional help is the best
– Trained to identify cause and effect
– can recommend treatment
– You’d call them if you broke your arm

Friends and family
– People who care, even ones you haven't met
– Empathy is great , but you aren’t a professional
– Don’t guess cause and effect
– Don’t recommend treatment if not a professional

Q: Is it Gender specific for men (since IT is male dominated) ?
– The “absence of fairness” problem is huge for women in IT

Q: How to promote Psychological safety?
– Blameless post-mortems

 

Damian Brady – Just let me do my job

After working in govt, went to work for new company and hoped to get stuff done

But whole dev team was unhappy
– Random work assigned
– All deadlines missed
– Lots of waste-of-time meetings

But 2 years later
– Hitting all deadlines
– Useful meetings

What changes were made?

New boss protected devs from MUD (Meetings, Uncertainty, Distractions)

Meetings
– In a broad sense: 1-1s, all-hands, normal meetings
– People are averaging 7.5 hours/week in meetings
– On average 37% of meeting time is not relevant to the person (~ $8,000 / year; rough arithmetic after this list)
– Do meetings have goals and do they achieve those goals?
– 38% without goals
– only half of remaining meet those goals
– around 40% of meetings have and achieve goals
– Might not be wasted. Look at “What has changed as result of this meeting?”
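
As a back-of-the-envelope check on that ~$8,000 figure (the hourly cost and number of working weeks are assumptions, not numbers from the talk):

```python
# Rough arithmetic behind the meeting-waste figure. The hourly cost and number of
# working weeks are assumed round numbers, not figures from the talk.
hours_per_week_in_meetings = 7.5
irrelevant_fraction = 0.37
working_weeks_per_year = 46     # assumption
hourly_cost = 65                # assumed fully loaded $/hour

wasted_hours = hours_per_week_in_meetings * irrelevant_fraction * working_weeks_per_year
print(round(wasted_hours), "hours wasted,", "$%.0f" % (wasted_hours * hourly_cost), "per year")
# ~128 hours, roughly $8,300 per year, i.e. in the ballpark of the ~$8,000 quoted
```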

Meetings fixes
– New Boss went to meetings for us (didn’t need everybody) as a representative
– Set a clear goal and agenda
– Avoid gimmicks
– don’t default to 30min or 1h

Distractions
– 60% of people interrupted 10 or more times per day
– Good to stay in a “flow state”
– 40% of people say they are regularly focussed in their work, but all are sometimes
– Lose focus 35% of the time when interrupted
– Study shows people can take up to 23 mins to get focus back after an interruption
– ~$25,000/year wasted due to interruptions

Distraction Fixes
– Allowing headphones, rule not to interrupt people wearing headphones
– “Do not disturb” times
– Little Signs
– Had "the finger" so that you could tell somebody you were busy right now and would come back to them
– Let devs go to meeting rooms or cafes to hide from interruptions
– Allow "go dark" time where email and chat are turned off

Uncertainty
– 82% in survey said their priorities were clear
– nearly 60% of people say their top priority changes before they can finish it
– Autonomy, mastery, purpose

Uncertainty Fixes
– Tried to let people get clear runs at work
– Helped people acknowledge the unexpected work, add to Sprint board
– Established a gate – Business person would have to go through the manager
– Make the requester responsible – made the requester decide what stuff didn’t get done by physically removing stuff from the sprint board to add their own
