Getting Data In

How to write new requirements for refining Splunk logs?

MS23
Explorer

Hi team,

We are using Splunk at the enterprise level.
I have received a requirement to refine and create logs in an efficient way that helps the run team understand and analyse issues when they come up. As a BA, I need to write the requirements for creating informative logs.

For example - a reference number needs to be included in the error message whenever an API fails.
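For illustration, the kind of log line I mean might look like this (the "ref" field name and format here are just placeholders, not a proposed standard):

```python
# Illustration only: attaching a reference number to an API failure log.
# The "ref" field name and format are placeholders, not an agreed standard.
import logging
import uuid

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")

try:
    raise TimeoutError("payment service did not respond")  # stand-in for a real API call
except Exception as exc:
    ref = uuid.uuid4().hex[:12]  # reference number, also returned to the caller
    logging.error('ref=%s api=payment status=failed reason="%s"', ref, exc)
```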

Can someone please advise, or provide any documents/references to start with, on what information needs to be included to redefine such logs and generate alerts?


PickleRick
SplunkTrust

It highly depends on the use case, but from experience rather than from any documents, I'd say that:

If written to a "continuous" medium (like a logfile, or sent via a network stream), events should be written atomically so that parts of different events are not intermixed. And they should be clearly delimited.
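As a sketch of that point (assuming POSIX append semantics, which is worth verifying for your platform and record sizes):

```python
# Sketch: build the whole event first, then emit it with one write() call,
# newline-delimited, so concurrent writers cannot interleave partial events.
import os

def emit(line: str) -> None:
    # A single os.write() on a file opened with O_APPEND is atomic in
    # practice for reasonably sized lines on POSIX systems (an assumption
    # worth verifying for your platform and record sizes).
    data = (line.rstrip("\n") + "\n").encode("utf-8")
    fd = os.open("app.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

emit("event=startup status=ok")
```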

Each event should contain a timestamp. Bonus points for a reasonable timestamp format and logging in UTC or including timezone information. Points subtracted for exotic time formatting ideas (like specifying the date by fortnights since last Easter 😉 but seriously - I've seen the timezone specified as a number of minutes offset from UTC; please don't do that). The timestamp should have a resolution relevant to your use case. If you're logging entries/exits at a factory gate you probably don't need sub-second precision. OTOH you probably don't want your network sessions timestamped only to the hour.

All events from a single source should have the same timestamp format! Bonus points for "static" placement of the timestamp within an event. Best solution - start your event with the timestamp.
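For example, a minimal sketch of both points using Python's standard logging module (the format itself is just an illustration):

```python
# A minimal sketch: every event starts with an ISO 8601 UTC timestamp
# at millisecond precision, so the format and position never vary.
import logging
from datetime import datetime, timezone

class UTCISOFormatter(logging.Formatter):
    def formatTime(self, record, datefmt=None):
        # record.created is a POSIX timestamp; render it in UTC with an
        # explicit offset so parsers never have to guess the timezone.
        ts = datetime.fromtimestamp(record.created, tz=timezone.utc)
        return ts.isoformat(timespec="milliseconds")

handler = logging.StreamHandler()
handler.setFormatter(UTCISOFormatter("%(asctime)s %(levelname)s %(message)s"))
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

logging.info("gate entry recorded")
# -> 2024-05-01T12:34:56.789+00:00 INFO gate entry recorded
```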

Be consistent - elements common to multiple event "categories" from a single source should be expressed in the same way (so if you have - for example - a "severity" field, put it in a well-known position, delimited or placed in a k-v pair; don't put it as the "third string in a pipe-delimited array" in one event and a JSON field in another).

If there are several events referring to the same entity or process, include some form of ID so separate events can be correlated. Best scenario - let it be some form of ID that can be used outside your logs to find that object (e.g. the message-id in email logs).
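A quick sketch of the correlation idea (the txn_id field name is made up for illustration):

```python
# Sketch: every event for one transaction carries the same correlation ID
# ("txn_id" is a hypothetical field name) so they can be tied together.
import logging
import uuid

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("orders")

txn_id = uuid.uuid4().hex  # generated once, repeated on every related event

log.info("txn_id=%s action=order_received items=3", txn_id)
log.info("txn_id=%s action=payment_authorized amount=42.50", txn_id)
log.error("txn_id=%s action=shipment_failed reason=address_invalid", txn_id)
# In Splunk, searching txn_id=<value> then returns the whole transaction.
```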

I'd say it's good practice to include in your log events both a strictly defined "machine-readable" part, which allows for easy parsing/manipulating/searching, and a human-readable descriptive part. Kinda similar to Windows events.
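Something along these lines, for example (all field names invented for illustration):

```python
# Sketch: a fixed key=value prefix for machines, followed by a free-form
# sentence for humans. Field names here are made up for illustration.
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)

logging.error(
    'event=api_failure ref=REF-0042 endpoint=/v1/payments status=502 '
    'msg="Payment gateway timed out after 3 retries; request rolled back"'
)
```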


MS23
Explorer

Hi, I appreciate your reply.

I am looking at writing requirements for an application so it can be monitored in Splunk.

Sorry, I didn't understand your response.


richgalloway
SplunkTrust

The OP used the phrase "create logs" at least twice, so we took the question to mean you want to know how to generate log events for sending to Splunk. If that is not what is wanted, then please explain your use case.

---
If this reply helps you, Karma would be appreciated.

MS23
Explorer

Thank you for the response!

Let me rephrase the question:

As a Business Analyst, I need to write the requirements to create informative logs. 

For example - a reference number needs to be included in the error message whenever an API fails.


I am not looking for the solution. Based on the above example, how can I make the application team and the Splunk team understand that this is a gap in the logs that needs to be addressed for better application monitoring?


richgalloway
SplunkTrust

The rephrasing seems the same as the original, so I'm sticking with my original answer.

---
If this reply helps you, Karma would be appreciated.

richgalloway
SplunkTrust

I'm pretty sure Splunk has very little on this subject since they pride themselves on being able to accept (almost) anything.

IMO, every log entry must include a timestamp indicating when the reported event occurred.  This may be different from when the event is detected/reported.  Timestamps must include date, time of day, and (preferably) time zone.  Be consistent in the format of timestamps.

Logs should include a severity indication (error, warning, etc.) for easier filtering.

Logs must be easily parsed by Splunk. I'll leave it to you to define "easily". It could be key=value, JSON, or just about anything else Splunk can extract fields from using props and transforms. Definitely avoid ambiguity in the logs - missing fields should be apparent (to a computer). Position-dependent fields must always be in the same order.
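For example, a JSON-lines sketch that pulls the timestamp, severity, and parseability points together (field names are illustrative, not a prescribed schema):

```python
# Sketch of JSON-lines logging; Splunk can extract these fields natively
# (e.g. KV_MODE = json in props.conf). Field names are illustrative.
import json
import sys
from datetime import datetime, timezone

def log_event(level, message, **fields):
    event = {
        "time": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "level": level,
        "message": message,
        **fields,  # a missing field is a missing key, unambiguous to a parser
    }
    sys.stdout.write(json.dumps(event) + "\n")

log_event("ERROR", "API call failed", ref="REF-0042", endpoint="/v1/orders", status=503)
```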

Other requirements will depend on what you plan to do with the logs.  Think about how the logs will be used and, therefore, what they need to contain to make those tasks easier.

Where possible, use shared code to help enforce whatever requirements you create.
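For example, a shared helper module that every team imports can make the contract hard to get wrong (a sketch; the envelope fields are assumptions, not a standard):

```python
# Sketch of a shared helper enforcing a common envelope (time, level,
# service, message). The exact fields are assumptions for illustration.
import json
import sys
from datetime import datetime, timezone

class AppLogger:
    """Teams import this instead of formatting log lines by hand."""

    def __init__(self, service):
        self.service = service

    def _emit(self, level, message, **fields):
        sys.stdout.write(json.dumps({
            "time": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
            "level": level,
            "service": self.service,
            "message": message,
            **fields,
        }) + "\n")

    def info(self, message, **fields):
        self._emit("INFO", message, **fields)

    def error(self, message, **fields):
        self._emit("ERROR", message, **fields)

log = AppLogger("orders")
log.error("API call failed", ref="REF-0042", status=502)
```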

---
If this reply helps you, Karma would be appreciated.