All Posts


@jkat54 Hello, I found that I get the lastTime tstats metadata from the export API endpoint when run from the CLI, but I do not get this lastTime field in a web search with the same query, even though the lastTime info is from last year from an offline UF. I guess there may be some web filtering. This only applies to a single result, though. Results can differ, I guess, due to a different user/role, app context, or API endpoint, which may be my case. Thanks.
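For what it's worth, a minimal sketch of pulling lastTime in the web UI with the metadata command (index=* is just a placeholder here; you would scope it to the index the offline UF wrote to):

| metadata type=sourcetypes index=*
| eval lastTime_readable=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table sourcetype firstTime lastTime lastTime_readable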
In Splunk, _raw is only one line, but it can contain e.g. a \n character. You can see it with e.g. "| table _raw".
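For example, a small sketch showing a multi-line _raw and how split()/mvindex() can pull out an individual line (the urldecode("%0A") call is just one way to produce a literal newline inside eval; the sample text is made up):

| makeresults
| eval _raw="first line" . urldecode("%0A") . "second line"
| eval lines=split(_raw, urldecode("%0A"))
| eval second_line=mvindex(lines, 1)
| table _raw lines second_line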
Hi @LearningGuy
When using makeresults, which is a report-generating command, you get a table output. To get a JSON tree view you need an event-based output, so I use this little trick to get an event and then override it with eval _raw, like this:

index=_internal | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
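As a small addendum: once _raw holds valid JSON, you could also flatten it into columns with spath if you want fields rather than the tree view. A minimal sketch reusing the same override (JSON shortened here just for brevity):

index=_internal | head 1
| eval _raw="{\"name\":\"John Doe\",\"age\":30}"
| spath
| table name age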
Hello, how do I create sample JSON data and display it in a tree structure? I used makeresults to create the sample JSON data below:

| makeresults | eval data = "{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"

The search result is below. My expected output is below. I have the option to select "list" from the drop-down, but this option is only available if I import the data into an index. Please help. Thanks

JSON data:
{
  "name": "John Doe",
  "age": 30,
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "zip": "12345"
  },
  "interests": [ "reading", "hiking", "coding" ]
}
Thank you, @livehybrid, @richgalloway. I'll get screenshots, but a related question: how do I access the second line of _raw?
This seems to be working great; over 24 hours I get a few quirks, but I can live with it. Thank you.
Hi @ranandeshi
I've posted an updated SPL directly on the question, but you can make this a single eval with:

| eval formatted_time = strftime((tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16) + tonumber(substr(identifier,18,8),16) / 1000000000), "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round(tonumber(substr(identifier,18,8),16) / 1000000, 0))

This means you could possibly use INGEST_EVAL to overwrite the _time field:

== props.conf ==
[yourSourcetype]
TRANSFORMS-taiTime = taiTimeExtract

== transforms.conf ==
[taiTimeExtract]
INGEST_EVAL = _time:=strftime((tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16) + tonumber(substr(identifier,18,8),16) / 1000000000), "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round(tonumber(substr(identifier,18,8),16) / 1000000, 0))

However, this assumes "identifier" is a field it can eval against. You might need to extract this first. Do you have a sample event I can work on to help, or is this enough to get you started?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
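One possible variant, if "identifier" isn't available at index time (a sketch only, assuming the TAI64N token sits at the very start of each raw event): derive it inside the same INGEST_EVAL, since INGEST_EVAL accepts multiple comma-separated expressions evaluated left to right. Note that _time is stored as epoch seconds, so the numeric value (fractional part included) can be assigned directly rather than a formatted string:

== transforms.conf ==
[taiTimeExtract]
# derive the @-prefixed TAI64N token from the start of _raw, then convert it to epoch seconds
INGEST_EVAL = identifier:=replace(_raw, "^(@[0-9a-f]{24}).*$", "\1"), _time:=tonumber(substr(identifier,2,16),16) - tonumber("4000000000000000",16) + tonumber(substr(identifier,18,8),16) / 1000000000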
I've been writing new pipelines to my Edge Processors when I discovered that no destination values are showing up for me to select. We only have two, our default destination of our cloud instance and... See more...
I was writing new pipelines for my Edge Processors when I discovered that no destination values are showing up for me to select. We only have two: our default destination of our cloud instance and an additional cloud instance. When I go to create a new pipeline, or modify an old one that is already configured to go to the default destination, the destination doesn't show up. The pipelines already created are still working and sending data in. Any idea what could be causing this to happen? It occurred very recently; prior to this I was able to create pipelines and add a destination. Even a work-around would be appreciated. Thanks!
Hi
To accurately convert TAI64N to a human-readable timestamp in Splunk, you need to:
Subtract the TAI64 epoch offset (0x4000000000000000) from the first 16 hex digits (seconds)
Add the nanoseconds (next 8 hex digits) as a fractional part
Format the result using strftime and printf

Here's the corrected SPL:

| makeresults
| eval identifier="@4000000068022d4b072a211c"
| eval tai64n_hex = substr(identifier, 2)
| eval tai64_seconds = tonumber(substr(tai64n_hex, 1, 16), 16) - tonumber("4000000000000000", 16)
| eval tai64_nanoseconds = tonumber(substr(tai64n_hex, 17, 8), 16)
| eval tai64_epoch = tai64_seconds + (tai64_nanoseconds / 1000000000)
| eval formatted_time = strftime(tai64_epoch, "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round((tai64_nanoseconds/1000000),0))
| table formatted_time

tai64_seconds extracts and normalises the seconds since the Unix epoch. tai64_nanoseconds extracts the nanoseconds. tai64_epoch combines seconds and fractional seconds. strftime formats the timestamp, and printf ensures milliseconds are zero-padded.

Note: TAI64N timestamps are based on TAI, not UTC. TAI is ahead of UTC by a number of leap seconds (currently 37). Splunk and most systems use UTC, so your converted time may be offset by this difference. If you need exact UTC, subtract the current TAI-UTC offset (e.g., 37 seconds) from tai64_epoch.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
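As an addendum, if you do need the UTC-aligned value, a minimal sketch continuing from the search above (the 37-second TAI-UTC offset is an assumption that would need updating if further leap seconds are added):

| eval utc_epoch = tai64_epoch - 37
| eval formatted_utc = strftime(utc_epoch, "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", round((tai64_nanoseconds/1000000),0))
| table formatted_time formatted_utc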
Okay, thank you for your reply. Is it possible to parse the TAI64N timestamp while indexing? If so, how can we do it?
If I understand correctly, the TAI64 time scale does not align completely with the UTC time scale, so you can expect inaccuracies when trying to convert TAI64 seconds to UTC. There are Python modules around which do these conversions, so you might need to write or find a custom command to handle this conversion for you.
Hi @hk_baek, the official Splunk documentation is the best guidance! https://splunkbase.splunk.com/app/4607 Ciao. Giuseppe
Hello, I would like some help converting the TAI64N format to "%m/%d/%Y %H:%M:%S". I tried to use the following query:

| makeresults
| eval identifier="@4000000068022d4b072a211c"
| eval tai64n_hex = substr(identifier, 2)
| eval tai64_seconds = tonumber(substr(tai64n_hex, 1, 16), 16) - tonumber("4000000000000000", 16)
| eval tai64_nanoseconds = tonumber(substr(tai64n_hex, 17, 8), 16)
| eval tai64_milliseconds = round(tai64_nanoseconds / 1000000, 3)
| eval formatted_time = strftime(tai64_seconds, "%m-%d-%Y %H:%M:%S") . "." . printf("%03d", round(tai64_milliseconds, 0))
| table formatted_time

But the value it returns is incorrect: sometimes the time is ~5 seconds ahead of _time and sometimes it's ~5 seconds behind it. I don't see the precise value being shown. The formatted_time should give me the output "2025-04-18 10:45:21.120" but I get "04-18-2025 10:40:00.120". Can someone assist me with this?
Hi @ws
Is the full path for the JSON file the same each time you have indexed it? If the path is different, then this might explain why it has been indexed twice instead of just continuing from where you last ingested. I'm pleased that you were able to get your events split out!
Please consider adding Karma / "Liking" the posts which helped.
Thanks
Will
Hi @danielbb
Could you please share a sample event and a screenshot of this so we can try to reproduce the issue and/or diagnose it?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @bapun18
Your user will need a role which has write permissions in the app that you want them to be able to upload a lookup to. You can update your app's read/write permissions at https://YourSplunkInstance/en-US/manager/permissions/search/apps/local/<YourAppName>

You also need the 'upload_lookup_files' capability, which is part of the default "user" role and anything which inherits the user role. This also means that a user can upload a lookup to any app which their role has write permissions to.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
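For illustration only (the role and app names below are placeholders, not anything from your environment), the same setup expressed in config rather than the UI might look roughly like this:

== metadata/local.meta (inside YourAppName) ==
[]
access = read : [ * ], write : [ admin, lookup_editor_role ]

== authorize.conf ==
[role_lookup_editor_role]
# inheriting the default user role brings the upload_lookup_files capability with it
importRoles = user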
Note: As a side effect of this issue, maxKBps (limits.conf) will also be impacted, as it requires thruput metrics to function.
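For reference, that throughput limit lives in the [thruput] stanza of limits.conf on the forwarder; a minimal example (256 is just an illustrative value, and 0 means unlimited):

== limits.conf ==
[thruput]
maxKBps = 256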
I’m looking for the recommended specifications for a GPU server when using the Splunk DSDL App. Any guidance would be appreciated!
Hello @jawahir007, how are you? I found out we can also filter by search_title. Where can we find the list of IR fields? Thanks for your help!
This is a fantastic case study of how Splunk handles major breaker tokens. Splunk is representing the field jobName as containing "(W6)", truncating the remainder of the value. I don't believe it is terminating because of the ") " in the value. After examining how other fields are extracted in this sample, I am convinced that it terminates the string exactly because the ")" closes the opening "(".

I'm sure this is described in some linguistic documents but I don't know how to find them. So here's a series of tests to observe.

The simplest case:

| makeresults
| eval _raw = "no_separator=abcdef, quote1 = \"abc\"def, quote2 = 'abc'def, bracket1=(abc)def, bracket2=[abc]def, bracket3 = {abc}def, white_space=abc def"
| extract kvdelim="=" pairdelim=,

Here, I'm explicitly prescribing kvdelim and pairdelim to avoid additional weirdness.

bracket1  bracket2  bracket3  no_separator  quote1  quote2  white_space
(abc)     [abc]     {abc}     abcdef        abc     'abc'   abc

The second one is perhaps trivial, except I added a trailing comma after the white_space entry:

| makeresults
| eval _raw = "quote1a = abc\"def\", quote2a = abc'def', bracket1a=abc(def), bracket2a=abc[def], bracket3a = abc{def}, white_space1=abc def,"
| extract kvdelim="=" pairdelim=,

bracket1a  bracket2a  bracket3a  quote1a   quote2a   white_space1
abc(def)   abc[def]   abc{def}   abc"def"  abc'def'  abc def

By adding a trailing comma, white_space1 now includes the part after the white space. Among these, white space behaviors are the most intriguing, so the following is dedicated to their weirdness:

| makeresults
| eval _raw = "white_space2=abc def, white_space3 =abc def, white_space4= abc def, white_space5 = abc def, white_space6 = abc def, white_space7 = abc def,"
| extract kvdelim="=" pairdelim=,

white_space2 white_space3 white_space5 white_space6 white_space7
abc def abc def abc def abc abc def

Here, you see some dynamics between white space(s) before and after "="; white space(s) before and after the first consequential non-space string also have some dynamics. White space dynamics also affect other brackets. Double quotes are perhaps the best protection of intention:

| makeresults
| eval _raw = "quote1b=\"abc\" def, quote1c =\"abc\" def, quote1d= \"abc\" def, quote1e = \"abc\" def, quote1f = \"abc\" def, quote1g = \"abc\" def,"
| extract kvdelim="=" pairdelim=,

quote1b  quote1c  quote1e  quote1f  quote1g
abc      abc      abc      abc      abc

The takeaway from all these is that developers need to express their intention by properly quoting values and, like @PickleRick suggests, judiciously using white spaces. Unprotected strings are subject to wild guesses by Splunk - or any other language. To jog Mark's memory: Pierre had launched an initiative to encourage/beg developers to standardize logging practice so logs are more Splunk-friendly. (I would qualify this as "machine-friendly", not just for Splunk.) Any treatment after logs are written - such as the workaround @livehybrid proposes - is bound to be broken again when careless developers make random decisions. Your best bet is to carry on the torch and give developers a good whip.
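To tie this back to the original jobName example, a quick sketch (the value here is made up): double-quoting the whole value is what should keep the parenthesised part attached.

| makeresults
| eval _raw = "jobName=\"nightly build (W6) step 2\", status=OK"
| extract kvdelim="=" pairdelim=","

With the quotes in place, jobName should come out as the full string "nightly build (W6) step 2" rather than stopping at the closing ")".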