All Posts

If I understand correctly, the TAI64 time scale does not align exactly with the UTC time scale (TAI runs ahead of UTC by the accumulated leap seconds), so you can expect inaccuracies when trying to convert TAI64 seconds to UTC. There are Python modules around which do these conversions, so you might need to write or find a custom command to handle the conversion for you.
Hi @hk_baek, the official Splunk documentation is the best guidance! https://splunkbase.splunk.com/app/4607

Ciao.
Giuseppe
Hello, I would like some help to convert the TAI64N format to "%m/%d/%Y %H:%M:%S". I tried to use the following query:

| makeresults
| eval identifier="@4000000068022d4b072a211c"
| eval tai64n_hex = substr(identifier, 2)
| eval tai64_seconds = tonumber(substr(tai64n_hex, 1, 16), 16) - tonumber("4000000000000000", 16)
| eval tai64_nanoseconds = tonumber(substr(tai64n_hex, 17, 8), 16)
| eval tai64_milliseconds = round(tai64_nanoseconds / 1000000, 3)
| eval formatted_time = strftime(tai64_seconds, "%m-%d-%Y %H:%M:%S") . "." . printf("%03d", round(tai64_milliseconds, 0))
| table formatted_time

But the value being returned is incorrect: sometimes the time is ~5 seconds ahead of the _time and sometimes it's ~5 seconds behind the _time. I don't see the precise value being shown. formatted_time should give me the output "2025-04-18 10:45:21.120", but I get "04-18-2025 10:40:00.120". Can someone assist me with this?
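For what it's worth, here is a minimal sketch of one likely culprit, building on the reply above: the seconds field of a TAI64N label counts TAI seconds, which run ahead of UTC by the accumulated leap seconds (37 since 2017), so a fixed offset has to be subtracted before strftime. The 37-second constant below is an assumption that only holds for timestamps after 2017, and whether it matches your data depends on how the producer generates its labels; an exact conversion would need a leap-second table, for example in a custom search command as suggested above.

| makeresults
| eval identifier="@4000000068022d4b072a211c"
| eval tai64n_hex = substr(identifier, 2)
``` seconds field: hex value minus 2^62 gives TAI seconds since the epoch ```
| eval tai_seconds = tonumber(substr(tai64n_hex, 1, 16), 16) - tonumber("4000000000000000", 16)
``` assumed fixed TAI-UTC offset, valid only for timestamps after 2017-01-01 ```
| eval utc_seconds = tai_seconds - 37
| eval nanoseconds = tonumber(substr(tai64n_hex, 17, 8), 16)
| eval milliseconds = floor(nanoseconds / 1000000)
| eval formatted_time = strftime(utc_seconds, "%Y-%m-%d %H:%M:%S") . "." . printf("%03d", milliseconds)
| table formatted_time

Note that floor() rather than round() keeps the milliseconds from rolling over to 1000 for values just under a full second.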
Hi @ws

Is the full path for the JSON file the same each time you have indexed it? If the path is different, that might explain why it has been indexed twice instead of just continuing from where you last ingested.

I'm pleased that you were able to get your events split out!

Please consider adding Karma / "Liking" the posts which helped.

Thanks
Will
Hi @danielbb

Please could you share a sample event and a screenshot of this, so we can try to reproduce and/or diagnose the issue?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @bapun18

Your user will need a role which has write permissions in the app that you want them to be able to upload a lookup to. You can update your app's read/write permissions at https://YourSplunkInstance/en-US/manager/permissions/search/apps/local/<YourAppName>

You also need the 'upload_lookup_files' capability, which is part of the default "user" role and anything which inherits the user role. This also means that a user can upload a lookup to any app which their role has write permissions to.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
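To make this concrete, here is a rough sketch of the two pieces involved, assuming a dedicated role; the names lookup_editor and YourAppName are placeholders, not anything defined in this thread.

authorize.conf (or manage roles via Settings > Roles):

[role_lookup_editor]
importRoles = user
# 'user' already grants this capability; shown explicitly for clarity
upload_lookup_files = enabled

YourAppName/metadata/local.meta (this is what the permissions page above edits for you):

[]
access = read : [ * ], write : [ admin, lookup_editor ]

Because the capability is global while write access is per-app, limiting uploads to a single app comes down to giving the role write access only in that one app.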
Note: As a side effect of this issue, maxKBps (limits.conf) will also be impacted, as it relies on thruput metrics to function.
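For reference, this is the setting in question; a minimal limits.conf sketch on a forwarder, where 256 is only an illustrative value:

[thruput]
# maximum indexing/forwarding throughput in KB per second;
# depends on the thruput metrics affected by this issue
maxKBps = 256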
I’m looking for the recommended specifications for a GPU server when using the Splunk DSDL App. Any guidance would be appreciated!
Hello @jawahir007, how are you? I found out we can also filter by search_title. Where can we find a list of IR fields? Thanks for your help!
This is a fantastic case study of how Splunk handles major breaker tokens. Splunk is representing the field jobName as containing "(W6)", truncating the remainder of the value. I don't believe it is terminating because of the ") " in the value. After examining how other fields are extracted in this sample, I am convinced that it terminates the string exactly because the ")" closes the opening "(". I'm sure this is described in some linguistics document, but I don't know how to find it. So here is a series of tests to observe.

The simplest case:

| makeresults
| eval _raw = "no_separator=abcdef, quote1 = \"abc\"def, quote2 = 'abc'def, bracket1=(abc)def, bracket2=[abc]def, bracket3 = {abc}def, white_space=abc def"
| extract kvdelim="=" pairdelim=,

Here, I'm explicitly prescribing kvdelim and pairdelim to avoid additional weirdness.

bracket1  bracket2  bracket3  no_separator  quote1  quote2  white_space
(abc)     [abc]     {abc}     abcdef        abc     'abc'   abc

The second one is perhaps trivial, except I added a trailing comma after the whitespace entry:

| makeresults
| eval _raw = "quote1a = abc\"def\", quote2a = abc'def', bracket1a=abc(def), bracket2a=abc[def], bracket3a = abc{def}, white_space1=abc def,"
| extract kvdelim="=" pairdelim=,

bracket1a  bracket2a  bracket3a  quote1a   quote2a   white_space1
abc(def)   abc[def]   abc{def}   abc"def"  abc'def'  abc def

By adding a trailing comma, white_space1 now includes the part after the white space. Among these, white space behaviors are the most intriguing, so the following is dedicated to their weirdness:

| makeresults
| eval _raw = "white_space2=abc def, white_space3 =abc def, white_space4= abc def, white_space5 = abc def, white_space6 = abc def, white_space7 = abc def,"
| extract kvdelim="=" pairdelim=,

white_space2  white_space3  white_space5  white_space6  white_space7
abc def       abc def       abc def       abc           abc def

Here, you see some dynamics between white space(s) before and after "="; white space(s) before and after the first consequential non-space string also have some dynamics. White space dynamics also affect the other brackets. Double quotes are perhaps the best protection of intention:

| makeresults
| eval _raw = "quote1b=\"abc\" def, quote1c =\"abc\" def, quote1d= \"abc\" def, quote1e = \"abc\" def, quote1f = \"abc\" def, quote1g = \"abc\" def,"
| extract kvdelim="=" pairdelim=,

quote1b  quote1c  quote1e  quote1f  quote1g
abc      abc      abc      abc      abc

The takeaway from all these is that developers need to express their intention by properly quoting values and, like @PickleRick suggests, judiciously using white space. Unprotected strings are subject to wild guesses by Splunk, or any other language. To jog Mark's memory: Pierre had launched an initiative to encourage/beg developers to standardize logging practice so logs are more Splunk-friendly. (I would qualify this as "machine-friendly", not just for Splunk.) Any treatment after logs are written, such as the workaround @livehybrid proposes, is bound to be broken again when careless developers make random decisions. Your best bet is to carry on the torch and give developers a good whip.
If you remove the read permission for that user's role from the app's permissions, that user will no longer be able to select that app.
Need to allow a user to upload lookups to only one particular app. Hi, I need to assign permissions to a particular role/user so that they can upload their CSV lookup files to only that particular app, not to any other apps. Can anyone help me with this?
Hi @kiran_panchavat,

In my case I don't use INDEXED_EXTRACTIONS = JSON, which I believe automatically handles and ignores the square brackets [] based on detection of the JSON format. Since I'm using transforms.conf to assign a sourcetype, every time the file is ingested the indexer treats the [ character as a separate event. Do you know if there's any way to ignore the square brackets if I do not use INDEXED_EXTRACTIONS = JSON?

Additionally, I've noticed another issue: whenever the JSON file gets overwritten with new content, whether it contains previously indexed data or new data, my script pulls it again and the indexer re-indexes the file, resulting in duplicate entries in the index.
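One possible approach, sketched here as a starting point rather than a confirmed fix: break events between the JSON objects and strip the enclosing array brackets with SEDCMD. This assumes the file is a single JSON array of objects, and the sourcetype name your_json_sourcetype is a placeholder.

props.conf (on the parsing tier):

[your_json_sourcetype]
SHOULD_LINEMERGE = false
# break between "},{" so each array element becomes one event;
# the comma in the capture group is discarded
LINE_BREAKER = \}(\s*,\s*)\{
# remove the leading "[" of the first event and the trailing "]" of the last
SEDCMD-strip_open_bracket = s/^\[//
SEDCMD-strip_close_bracket = s/\]$//

For the duplicate entries: a monitor input re-reads a file whenever its content changes from the beginning, so if your script overwrites the same file on every pull, one common workaround is to write each pull to a fresh file and use a batch input with move_policy = sinkhole, so each file is indexed once and then deleted.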
You should raise a support ticket if you are a paid customer; otherwise, create an idea for this at ideas.splunk.com. The Community is not an official Splunk support forum, and Splunk does not pick up questions asked here and create cases from them.
There's a small mistake in @gcusello's formula. src_interface and src_int should be coalesced (also a small spelling error), not renamed.

(index=network "arp-inspection" OR "packets received") OR (index=cisco_ise sourcetype=cisco:ise:syslog User_Name="host/*")
| eval NetworkDeviceName=coalesce(NetworkDeviceName, Network_Device), src_int = coalesce(src_int, src_interface)
| rename mnemonic AS Port_Status
| stats earliest(device_time) AS device_time values(User_Name) AS User_Name values(src_ip) AS src_ip values(src_mac) AS src_mac values(message_text) AS message_text values(Location) AS Location values(Port_Status) AS Port_Status BY "NetworkDeviceName", "src_int"
| table device_time, NetworkDeviceName, User_Name, src_int, src_ip, src_mac, message_text, Location, Port_Status
SIX years later and this is still the behavior. Why is this even allowed to persist? The DESIGN of your software practically means the only foolproof way to deploy SSL is to use the password "password", because Splunk *just might not* feel like re-hashing anything. Do YOU think it's worth your time to fix this, for the love of the hundreds of millions of dollars you've earned? @splunk You owe me and my partner some hair.
This seems to be some fancy modern top-like program, and I suppose it shows the separate threads of a single splunkd process. Notice that the memory usage is identical for all those entries.
I can also confirm that the UF is working in my environment on several macOS 15.4 machines, both Intel and M3. However, the initial versions on those machines were lower than 9.4 and were then upgraded.
It's exactly the way you need to do it. In your case you must do everything with props and transforms instead of defining it in inputs.conf: clone the sourcetype to send it to the HF and filter it as you need, and send the original to the local indexers. You can check these links:
- https://community.splunk.com/t5/Getting-Data-In/How-can-I-use-CLONE-SOURCETYPE-to-send-a-cloned-modified-event/m-p/317487
- https://www.tekstream.com/blog/routing-pii-data-to-multiple-indexes/
They explain this with samples.
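A rough sketch of the props/transforms pattern those links describe; my_sourcetype, my_sourcetype_clone and hf_group are placeholders, and the details depend on your environment.

props.conf:

[my_sourcetype]
# clone every event of this sourcetype
TRANSFORMS-clone = clone_for_hf

[my_sourcetype_clone]
# route only the clones to the HF output group
TRANSFORMS-route = route_clone_to_hf

transforms.conf:

[clone_for_hf]
REGEX = .
CLONE_SOURCETYPE = my_sourcetype_clone

[route_clone_to_hf]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = hf_group

outputs.conf:

[tcpout:hf_group]
server = hf.example.com:9997

The originals keep their default routing to the local indexers, while the clones are steered to the HF, where you can filter them as needed.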
You must add all the fields you need to the stats command if you want them to be present after it runs. Use values(a) AS a values(b) AS b, as is already done for the other fields. Here is an old post which explains how you can replace the different kinds of join in SPL: https://community.splunk.com/t5/Splunk-Search/What-is-the-relation-between-the-Splunk-inner-left-join-and-the/m-p/391288/thread-id/113948
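A minimal, runnable illustration with made-up field names (key, a and b are placeholders, not fields from the thread): only the fields named in the stats aggregations survive past the command.

| makeresults count=2
| streamstats count AS key
| eval a = "a" . key, b = "b" . key
``` a and b survive because they are listed; any unlisted field would be dropped ```
| stats values(a) AS a values(b) AS b BY key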