All Posts


Check this server.conf setting that you may be missing (it goes under sslOptions):

sslCommonNameToCheck = <commonName1>, <commonName2>, ...
* If set, and 'sslVerifyServerCert' is set to "true", splunkd limits most outbound HTTPS connections to hosts which use a certificate with one of the listed common names.
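As a concrete sketch of how that setting could be used together (the stanza placement and common names here are assumptions for illustration, not taken from the original post — check the server.conf spec for your version):

```
# server.conf -- hypothetical example; adjust the stanza and CNs for your deployment
[sslConfig]
sslVerifyServerCert = true
# Only allow most outbound HTTPS connections to hosts presenting a
# certificate whose common name matches one of these:
sslCommonNameToCheck = splunkd.example.com, indexer.example.com
```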
In order to be able to colour them differently, you need separate fields. Try adding this to the end of your search | transpose 0 header_field=category
I would expect not.
Try something along these lines ^name=\"([^\"]*)\",value=(\[([^\]]+)\]|\"[^\"]+\")(.*) https://regex101.com/r/id6m8s/1
Remember that during the ingestion phase Splunk mostly processes the event as a whole - extractions (unless you have indexed fields) are done at search time. So if you wanted to encrypt part of the raw message (for now leaving aside the question of how to do it), you'd have to extract part of the message into a field, encrypt that field, replace the original part of the raw message with the encrypted field value, and finally "forget" the extracted and encrypted field values (so they do not get indexed alongside the raw event). Very, very ugly and error-prone. And we haven't even touched the question of _how_ to encrypt the value.
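For comparison, the approach Splunk does support at ingest time is masking (not encrypting) with SEDCMD in props.conf. A minimal sketch, assuming the sensitive part is a 16-digit card number and using a hypothetical sourcetype name:

```
# props.conf -- hypothetical sourcetype; this masks the value, it does not encrypt it
[my:sourcetype]
# Replace the first 12 digits of a 16-digit number with X, keep the last 4
SEDCMD-mask_card = s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g
```

Masking is irreversible, which is usually what compliance actually requires; if you genuinely need reversible encryption, that has to happen before the data reaches Splunk.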
Yes I have seen this exactly. But is it possible to work around this in any way?
Do a quick test:

[ | makeresults | eval search="| makeresults" ]

If you look into the job log you'll see that the internal search gets expanded to

Expanded index search = ([ | makeresults | eval search="| makeresults" ])

but after the subsearch is evaluated and the result is returned to the outer search, it is treated as a string, with the pipe control character escaped:

Expanded index search = (\| makeresults)

This means you will be searching for the literal pipe character followed by the word "makeresults".
Hi @rphillips_splk, where can I find the docs for commands like ./splunk _internal call /services/data/inputs/monitor/_reload -auth admin:changeme ? Can I do a POST with it?
I am sorry for the confusion, I updated the original question. The idea is to dynamically create strings of eval commands in a subsearch (depending on a lookup, e.g.) and then apply these to the base search by literally putting them into the search command. I hope I have clarified this now.
Perhaps you could try changing the line breaking? Try something like this LINE_BREAKER = timestamp\":\"[^\"]+\"}}([\r\n]+)  
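In props.conf that suggestion might look like the stanza below (the sourcetype name is an assumption; SHOULD_LINEMERGE = false is the usual companion when LINE_BREAKER fully defines event boundaries):

```
# props.conf -- hypothetical sourcetype for the JSON events in question
[my:json:sourcetype]
# Break events after the closing braces that follow the "timestamp" value;
# the capture group marks the characters discarded between events
LINE_BREAKER = timestamp\":\"[^\"]+\"}}([\r\n]+)
SHOULD_LINEMERGE = false
```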
Hello, thank you for your reply, but it doesn't work. Maybe it isn't possible to convert the JSON data that I got from DB Connect.
Hi @duesser, please try this:

index=abc [ | makeresults | addinfo | eval earliest=relative_time(info_min_time,"-60s"), latest=info_max_time | fields earliest latest ]

Ciao. Giuseppe
Hi @manojchacko78, you can use fillnull to replace the missing values with "NA":

| rex field=AddtionalData "Business unit:(?<BusinessUnit>[^,]+)"
| rex field=AddtionalData "Location code:(?<Locationcode>[^,]+)"
| rex field=AddtionalData "Job code :(?<Jobcode>[^,]+)"
| fillnull value="NA" BusinessUnit
| fillnull value="NA" Locationcode
| fillnull value="NA" Jobcode
| stats count by BusinessUnit Locationcode Jobcode
| fields - count

Ciao. Giuseppe
I am extracting these three values, and if there is any empty value in any of the fields, it returns no results. How do I replace the blank values with NA in the rex statements?

| rex field=AddtionalData "Business unit:(?<BusinessUnit>[^,]+)"
| rex field=AddtionalData "Location code:(?<Locationcode>[^,]+)"
| rex field=AddtionalData "Job code :(?<Jobcode>[^,]+)"
| stats count by BusinessUnit Locationcode Jobcode
| fields - count
Please share the raw JSON rather than a formatted version so volunteers can try out solutions. Please use a code block </> to paste the raw JSON into to preserve the formatting from the original event.
@inventsekar Yes, it's a clustered environment. We have 6 indexers and a single SH. And yes, those files are taking up to 68 GB.
I know that I can do

index=abc [ | makeresults | addinfo | eval filter_t="earliest=".(info_min_time-60)." latest=".info_max_time | return filter_t ]

which literally becomes

index=abc earliest=1698301592.0 latest=1698301792.0

and I would like to use this behavior to dynamically define a command.
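The return trick above only works for search terms, because subsearch output is treated as a literal string (pipes get escaped). One hedged workaround for dynamically chosen commands is the map command, which substitutes field values as $tokens$ into a search string it then executes per input row. A sketch under that assumption (the field and command string here are made up for illustration, and map has per-row overhead, so test carefully):

```
| makeresults
| eval cmd="eval marker=1"
| map search="search index=abc | $cmd$" maxsearches=1
```

Note that map runs one search per input row, so this scales poorly and is best kept to a handful of dynamically built commands.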
@richgalloway In the table I got an empty column for last_successful_login.
Hi @duesser, when you use a subsearch, the main search runs using the subsearch's output (specifically, the fields you have in return or in fields). What's your requirement? Ciao. Giuseppe
Hi All, I am looking for a solution to integrate Splunk in AWS with HIPAA compliance. How is this set up? Is PrivateLink required for HIPAA compliance?