All Posts

Thank you so much for your help! Much appreciated
A search artifact is the collection of data produced by a search. It includes the results of the search plus the search log, telemetry, and more. Artifacts should be replicated so they can be retrieved by other search heads in the cluster; for example, if you are disconnected and reconnect to a different SH, you still have access to the artifacts. The knowledge bundle is the set of information indexers need to service a search query. It includes .conf files, lookup files, external commands, and more. The bundle supplies data needed for search-time operations that the indexer otherwise would not have (user KOs, for example).
Hi @PATAN, Is this a Splunk-specific question or a general web and ServiceNow development question? For Splunk, start with https://dev.splunk.com/enterprise/docs/devtools/javascript/sdk-javascript/. The old Building Splunk Solutions books are still useful but quite dated at this point. Depending on your specific use case, you may already have the functionality you need available in simpler forms. For example, the Splunk Add-on for ServiceNow includes commands and workflow actions to create ServiceNow events and incidents. What do you have so far?
The default behavior is for the indexer to create enough copies of the data to meet the RF before sending an ACK.  That can be changed in server.conf, however.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Useforwarders#How_indexer_acknowledgment_works for details.
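Not Splunk internals — just a toy model of the rule above, with illustrative names: with useACK enabled, the forwarder keeps each chunk in its wait queue until the indexer reports that replication has met the RF, and only then releases it.

```python
# Toy sketch of the default acknowledgment rule (illustrative, not Splunk code):
# a chunk leaves the forwarder's wait queue only once copies >= RF.
from collections import deque

RF = 3  # replication factor assumed for this sketch

wait_queue = deque(["chunk-1", "chunk-2"])

def on_replication_report(chunk: str, copies: int) -> None:
    # ACK (and release from the wait queue) only when enough copies exist.
    if copies >= RF and chunk in wait_queue:
        wait_queue.remove(chunk)

on_replication_report("chunk-1", 2)  # not enough copies yet -> still queued
on_replication_report("chunk-1", 3)  # RF met -> ACKed and released
print(list(wait_queue))
```

The queue-until-RF behavior is what the server.conf setting in the linked docs relaxes.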
Hi @riposans, The text doesn't appear to be syslog (see RFC 3164 and RFC 5424). Is this a raw TCP or UDP stream? You may want to try:

[ABC]
SHOULD_LINEMERGE = false
LINE_BREAKER = ()\{"@timestamp"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9Q%Z
TIME_PREFIX = {"@timestamp":"
MAX_TIMESTAMP_LOOKAHEAD = 30
# for events IN THE YEAR 2000 (thanks, Conan!)
MAX_DAYS_AGO = 10000
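If it helps to see what that LINE_BREAKER does, here's a quick Python approximation (sample events invented; Splunk discards whatever the capture group matches, which here is the empty string, so the effect is simply "break before each {"@timestamp"):

```python
import re

# Hypothetical raw TCP stream: two JSON events concatenated with no newline.
stream = ('{"@timestamp":"2024-06-21T11:34:07.000000000Z","msg":"first"}'
          '{"@timestamp":"2024-06-21T11:34:08.000000000Z","msg":"second"}')

# Emulate LINE_BREAKER = ()\{"@timestamp" with a zero-width split:
# break immediately before each occurrence of {"@timestamp".
events = [e for e in re.split(r'(?=\{"@timestamp")', stream) if e]
print(events)
```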
If you still see "permissions on /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key are too open," make sure the file is owned by your Splunk user and change the permissions to user (owner) read or read+write:

$ chmod 0600 /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key

If you're using a file system that supports extended ACLs, also make sure none are applied. You can check with getfacl:

$ getfacl -p /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
# file: /opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key
# owner: splunk
# group: splunk
user::rw-
group::---
other::---
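To see what "too open" means in practice — the keyfile check effectively rejects any group or other permission bits (my reading of it) — here's a small Python sketch using a stand-in file:

```python
import os
import stat
import tempfile

# Stand-in for splunk.key: create a file and lock it down as chmod 0600 does.
path = os.path.join(tempfile.mkdtemp(), "splunk.key")
with open(path, "w") as f:
    f.write("dummy key material\n")
os.chmod(path, 0o600)

# The "too open" complaint boils down to: no group/other permission bits set.
mode = stat.S_IMODE(os.stat(path).st_mode)
too_open = bool(mode & (stat.S_IRWXG | stat.S_IRWXO))
print(oct(mode), "too open" if too_open else "ok")
```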
Hi @Stopplis, Similar but with input type=checkbox. If you need examples, please post a new question with more detail, and I (or someone else if they get to it first) will be happy to help. Edit: Looks like you did and have an answer pending. Enjoy!
Hi @Shetry, This should be posted as a new question, but briefly: Splunk Universal Forwarder and Splunk Enterprise share the same (or a similar) codebase. Binary detection, event breaking, and more are handled in parsingQueue. If force_local_processing is enabled in props.conf, line breaking, timestamp extraction, and transforms can also be handled by a universal forwarder. See the following for a high-resolution PDF of the last (v7.2) pipeline diagram. It's still applicable today, but you'll need to cross-reference Splunk documentation for the latest features. https://web.archive.org/web/20220125091543/https://wiki.splunk.com/Community:HowIndexingWorks https://web.archive.org/web/20220125091543/https://wiki.splunk.com/File:Splunk_EventProcessing_v20_1_UF_Indexer.pdf
| rex mode=sed "s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g s/\\\\\"/\"/g"
| extract pairdelim="\"{,}" kvdelim=":"

Thank you for your help; the above worked, but I want it implemented at index time, not at search time.
1. Actual data looks like below. Data in string format: " { } "

2. From the UI, the below worked to some extent. Data string to list: [ { } ]

| rex mode=sed "s/(\"Data\":\s+)\"/\1[/g s/(\"Data\":\s+\[{.*})\"/\1]/g s/\\\\\"/\"/g"

The issue now is that it is not automatically identifying the key-value pairs inside the Data dictionary, irrespective of the setting KV_MODE = json.
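For anyone following along, the three sed expressions can be replayed outside Splunk. This Python sketch uses a cut-down, invented event; note the final unescape step is global over the whole event, which matters as soon as escaped quotes exist outside the Data value:

```python
import re
import json

# Toy one-line event in the shape described above (illustrative values only):
# the Data value is a stringified JSON object with escaped quotes.
raw = ('"Comments": "New alert", '
       '"Data": "{\\"etype\\":\\"MalwareFamily\\",\\"Intent\\":\\"Probing\\"}", '
       '"EntityType": "MalwareFamily"')

# Step 1: the opening quote of the Data string becomes "[".
step1 = re.sub(r'("Data":\s+)"', r'\1[', raw)
# Step 2: the closing quote after the object becomes "]".
step2 = re.sub(r'("Data":\s+\[{.*})"', r'\1]', step1)
# Step 3: unescape \" -> " (global -- the weak point of this approach).
step3 = step2.replace('\\"', '"')

obj = json.loads("{" + step3 + "}")
print(obj["Data"][0]["Intent"])
```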
Can you tell me how to contact community support?
1. https://regex101.com/r/jPZ4yy/1
2. https://regex101.com/r/PmwS2C/1
3. https://regex101.com/r/SBMRme/1

For the first regex, I have provided a sample of 3 events (EntityValue, Name, Ids — anything in JSON format can appear). For the third regex, sed works on _raw, but it should work only within the Data dictionary value. For example, (\"Comments\": \"New alert\", ) is also changed; nothing else should be formatted.
Enable the CAP_DAC_READ_SEARCH capability.  See https://docs.splunk.com/Documentation/Forwarder/9.2.1/Forwarder/Installleastprivileged
Can anyone tell me the best practice for the splunkfwd user to access directories/logs owned by other users and root?

I'm not interested in changing dir/log ownership. We could use ACLs, but that's a lot of work. Out of the box, what is the access level of the splunkfwd user post-install?
It is unlikely to be random, since it is generated by a system. There is likely some pattern to it, but if you do not share that information, we are unlikely to guess it, and we would be wasting our time attempting a solution until you provide sufficient relevant details.
Has the bug been resolved in Splunk Enterprise version 9.2.1 (latest version)?
It is not that you will always have Entity Value next to data. It is random.
Hello there, I hope you are doing well. I was studying Splunk basics and came across an image that made me ask the same question you have asked here, but I don't understand the explanation. I would be grateful if you could explain to me why the UF has a parsing queue in it. Thank you!
Try something like this:

| rex mode=sed "s/(Data\": )\"/\1[/g s/}\"(, \"EntityType)/}]\1/g s/\\\\\"/\"/g"
Input Event:

[so much data exists in the same single line ],"Comments": "New alert", "Data": "{\"etype\":\"MalwareFamily\",\"at\":\"2024-06-21T11:34:07.0000000Z\",\"md\":\"2024-06-21T11:34:07.0000000Z\",\"Investigations\":[{\"$id\":\"1\",\"Id\":\"urn:ZappedUrlInvestigation:2cc87ae3\",\"InvestigationStatus\":\"Running\"}],\"InvestigationIds\":[\"urn:ZappedUrlInvestigation:2cc8782d063\"],\"Intent\":\"Probing\",\"ResourceIdentifiers\":[{\"$id\":\"2\",\"AadTenantId\":\"2dfb29-729c918\",\"Type\":\"AAD\"}],\"AzureResourceId\":null,\"WorkspaceId\":null,\"Metadata\":{\"CustomApps\":null,\"GenericInfo\":null},\"Entities\":[{\"$id\":\"3\",\"MailboxPrimaryAddress\":\"abc@gmail.com\",\"Upn\":\"abc@gmail.com\",\"AadId\":\"6eac3b76357\",\"RiskLevel\":\"None\",\"Type\":\"mailbox\",\"Urn\":\"urn:UserEntity:10338af2b6c\",\"Source\":\"TP\",\"FirstSeen\":\"0001-01-01T00:00:00\"}, \"StartTimeUtc\": \"2024-06-21T10:12:37\", \"Status\": \"Investigation Started\"}","EntityType": "MalwareFamily", [so much data exists in the same single line ]

The event is a single line containing much more data. I want to substitute (\") with (") only where it falls within the Data dictionary value — nothing before and nothing after.

Sample regex: https://regex101.com/r/Gsfaay/1 (only the highlighted data in group 4 should be modified). The dictionary value is enclosed in quotes (as a string); I want the quotes replaced by [] braces so it becomes a list (groups 3 and 6).

Output Required:

[so much data exists in the same single line ],"Comments": "New alert", "Data": [{"etype":"MalwareFamily", so on,"Status":"Investigation Started"}],"EntityType": "MalwareFamily", [so much data exists in the same single line ]

Trials:

[testing_logs]
SEDCMD-DataJson = s/\\\"/\"/g s/"Data": "{"/"Data": \[{"/g s/("Data": \[{".*})",/$1],/g
INDEXED_EXTRACTIONS = json
KV_MODE = json

I tried it in multiple steps as mentioned in my example above, but in Splunk, SEDCMD works on the entire _raw value, and I shouldn't apply it globally.

1. regex101.com/r/0g2bcL/1
2. regex101.com/r/o3eFgJ/1
3. regex101.com/r/D7Of0v/1

The only issue with the first regex is that it shouldn't be applied globally to the entire event value; it should apply only within the Data dictionary value.
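Since SEDCMD is sed-style only and always runs against the whole _raw, a replacement scoped to just the Data value would have to happen upstream of indexing (e.g., in a preprocessing script). A Python sketch of the idea, using a toy event and a callback that unescapes quotes only inside the captured Data payload:

```python
import re
import json

# Toy one-line event (illustrative values); escaped quotes exist only in Data.
raw = ('{"Comments": "New alert", '
       '"Data": "{\\"etype\\":\\"MalwareFamily\\",\\"Status\\":\\"Investigation Started\\"}", '
       '"EntityType": "MalwareFamily"}')

def fix_data(m: re.Match) -> str:
    # Unescape quotes only inside the captured Data payload and wrap it in [ ]
    # so the stringified object becomes a one-element list.
    inner = m.group(2).replace('\\"', '"')
    return m.group(1) + '[' + inner + ']' + m.group(3)

# Group 2 is the quoted Data payload; everything outside it is left untouched.
cleaned = re.sub(r'("Data":\s*)"({.*?})"(\s*,)', fix_data, raw)
obj = json.loads(cleaned)
print(obj["Data"][0]["Status"])
```

The callback is the key difference from sed: the substitution logic sees only the matched Data group, so quotes elsewhere in the event (e.g., around Comments) are never touched.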