All Posts


That solution will work when we have a common field in both, but that's not the case here.
Found it. CAP_DAC_READ_SEARCH means Splunk can read anything. Now I have to decide if I want to keep this setting. https://community.splunk.com/t5/Installation/Security-issue-Splunk-UF-v9-x-is-re-adding-readall-capability/m-p/649047/highlight/true
Hi everyone, I have a JSON data payload as below:

{
  "location": "US",
  "all_results": {
    "serial_a": { "result": "PASS", "version": 123, "data": ["data1", "data2", "data3"] },
    "serial_b": { "result": "PASS", "version": 456, "data": ["data4", "data5"] },
    "serial_c": { "result": "FAIL", "version": 789, "data": ["data6", "data7"] }
  }
}

and I would like to use a Splunk query to make a table like:

serial_number  result  version  data
serial_a       PASS    123      data1, data2, data3
serial_b       PASS    456      data4, data5
serial_c       FAIL    789      data6, data7

How do I use a Splunk query to organize the result? I know I'm able to grab the data with:

| spath path=all_results output=all_results
| eval all_results=json_extract(all_results)

The difficult part is the serial_number. The keys share a common prefix, serial, but the rest is dynamic. Therefore, when I try to grab the data inside each serial_number, for example version, I'm not able to use a query like:

| spath output=version path=all_results.serial*.version

Could you give me some ideas on how to do that? Thank you!
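One hedged approach for dynamic key names is to enumerate the keys with json_keys and expand them into rows, roughly like this (a sketch assuming Splunk 8.1+ for json_keys and json_array_to_mv; field names taken from the post above):

| spath path=all_results output=all_results
| eval serial_number=json_array_to_mv(json_keys(all_results))
| mvexpand serial_number
| eval result=json_extract(all_results, serial_number.".result")
| eval version=json_extract(all_results, serial_number.".version")
| eval data=json_array_to_mv(json_extract(all_results, serial_number.".data"))
| table serial_number result version data

mvexpand gives one row per serial, and each json_extract then addresses that serial's subtree with a computed path, which sidesteps the wildcard limitation.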
Hello. I have some issues with field parsing for CSV files using props configuration. I should be getting 11 fields for each of the events/rows, but parsing is giving me 17 fields. Here are 3 sample events (the first row is the header row) from that CSV file, and the props.conf file is also provided below:

Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8,Col9,Col10,Col11
APIDEV,4xs54,000916,DEV,Update,Integrate,String\,Set\,Number\,ID,Standard,2024-07-10T23:10:45.001Z,Process_TIME\,URI\,Session_Key\,Services,Hourly
APITEST,4ys34,000916,TEST,Update,Integrate,String\,Set\,Number\,String,Typicall\,Response,2024-07-10T23:10:45.021Z,CPU_TIME\,URI\,Session_Key\,Type\,Request,Monthly
APITEST,4ys34,000916,DEV,Insert,Integrate,Char\,Set\,System\,ID,On_Demand,2024-07-10T23:10:45.051Z,CPU_TIME\,URI\,Session_Key\,Services,Hourly

(Each backslash-escaped sequence, such as String\,Set\,Number\,ID, should count as one field.)

props.conf:

[mypropscon]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = CSV
KV_MODE = none
disabled = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%QZ
HEADER_FIELD_LINE_NUMBER = 1

Any recommendation to resolve this issue will be highly appreciated. Thank you so much for your support, as always.
Hi @gcusello, The funny part is I have the opposite problem: I haven't given the user access to read /var/log/messages, yet it seems like Splunk still reads it. How do I ask a Linux expert specifically? Do you mean on this forum or elsewhere? Thanks
Hi @inventsekar, I don't want to show the actual results but here you can see there are results. Hope this helps.
Use stats to combine them:

index=data_set1 OR index=data_set2
| stats values(*) as * by Domain

This uses values(*) as * to collect all fields from both data sources against their common field Domain. You can then filter for what you do or don't want, e.g. after the above:

| where T1_Fld1="AAB"
See this thread https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
Hi @jcorcorans .. one basic query.. do you want to onboard the logs, or are the logs already onboarded and they contain timestamps in epoch format (for example, 1720450799)?

Using props.conf during data onboarding/ingestion, we can specify which field holds the timestamp and its format, so Splunk will read the timestamp and the logs fine. (Internally, Splunk stores timestamps in epoch format; when displaying search results, it converts them to a human-readable format.)

Once you have ingested/onboarded the logs and the timestamp is still showing in epoch format, you can use the convert functions.
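A minimal sketch of both options, assuming a hypothetical sourcetype my_epoch_logs whose events start with the epoch value:

props.conf (at ingestion time):

[my_epoch_logs]
# %s tells Splunk the timestamp is in epoch seconds
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10

At search time (for data that is already indexed, with a hypothetical extracted field epoch_time):

index=my_index sourcetype=my_epoch_logs
| convert ctime(epoch_time) AS readable_time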
At times these simple issues may give us a big headache. The shortest troubleshooting step is to reinstall the agent (do this only if you have minimal custom configs in the UF).
Wait, wait, wait. Do you mean that your UF has to keep track of over a million files? That can have a huge memory footprint. Also polling directories containing all those files can be intensive. And not much tuning can help here. Side note - are you sure you need to use batch input? You're showing events from tailingprocessor which is used with monitor inputs.
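For reference, the two input types differ roughly like this (an inputs.conf sketch with hypothetical paths; a batch input consumes and deletes the files instead of tracking them):

# batch: index once, then delete the file (no ongoing tracking)
[batch:///var/log/myapp/archive/*.log]
move_policy = sinkhole
index = myapp

# monitor: tail files continuously via the tailing processor
[monitor:///var/log/myapp/current/*.log]
index = myapp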
The non-trial Cloud service uses proper certificates issued by a well-known CA. Since a trial is not meant to process any sensitive data, Splunk uses self-signed certs. You can still use SSL on those connections; you just have to disable certificate verification on the sending side (like curl's -k option does).
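On a forwarder, that would look roughly like this (an outputs.conf sketch assuming a hypothetical output group name and Cloud host):

[tcpout:my_cloud_group]
server = mytrial.splunkcloud.com:9997
useSSL = true
# accept the trial's self-signed certificate
sslVerifyServerCert = false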
Hi, Thanks in advance! Just curious, has anybody configured a GitHub webhook to work with Splunk HEC?
Hello guys, I need to collect logs for when the Azure admin resets a password or excludes an account. I have tried using the Splunk Add-on for Microsoft Azure and the Splunk Add-on for Microsoft O365, but I can't receive these logs. I do receive the Microsoft Substrate Management ones. Is there any other app I should install to collect this?
Hi Mario, thanks for your response. This is what I get when requesting login.
You said yourself what the LINE_BREAKER is, so Splunk breaks at the end of the line. BTW, you're using indexed extractions, which might further complicate things. I'd try to write a regex for breaking at every second pipe or at the end of the line (if applicable), and probably _not_ use indexed extractions. Something like

[^|]+\|[^|]+([\r\n|])

Bonus remark - are you sure you need crcSalt?
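In props.conf that suggestion would look roughly like this (a sketch with a hypothetical sourcetype; LINE_BREAKER discards whatever the first capture group matches and breaks the event there):

[my_piped_sourcetype]
SHOULD_LINEMERGE = false
# break after every second pipe-delimited field, or at end of line
LINE_BREAKER = [^|]+\|[^|]+([\r\n|]+)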
Thanks for the response. If I want it to work with SSL enabled, should I be using a different tier of Splunk Cloud?
I'm a bit lost here. Either you mis-copy-pasted here, or it has no chance of ever matching. You have eventDate as a string produced by strftime, you use it to find something in your lookup, then you strptime a possible match into a numeric value dateLookup. There is no way that eventDate will ever be equal to dateLookup. One is a string, the other is a number.
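Keeping both sides in the same type fixes the comparison, for example as strings (a sketch with hypothetical lookup and field names):

| eval eventDate=strftime(_time, "%Y-%m-%d")
| lookup my_dates.csv date AS eventDate OUTPUT date AS dateLookup
| eval matched=if(eventDate == dateLookup, "yes", "no")

Here the lookup value is left as a string rather than run through strptime, so the equality test compares like with like.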
Hello everyone, I do not know why the classification is Threat, even though I chose Endpoint.
Adding to the already provided answer, your idea wouldn't work because appendcols adds fields from the appended dataset to the original results row by row (in as-is order). So in your case, the first row from the second lookup would "extend" the first row of the first lookup's contents, the second row would be glued to the second row, and so on. Also, since your "foreach cidr" only specifies a single field, it would yield exactly the same results as if you had simply written your eval using "cidr" instead of "<<FIELD>>". And since your cidr and ip fields most probably didn't happen to "join" so that they landed in matching rows, your result was always a no-match. I suppose you wanted to add the transposed contents of the second lookup to each result of your initial inputlookup search, but it doesn't work that way.
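One way to actually test every ip against every cidr is a cross join (a sketch assuming hypothetical lookups ips.csv with a field ip and cidrs.csv with a field cidr):

| inputlookup ips.csv
| eval joiner=1
| join type=inner max=0 joiner
    [| inputlookup cidrs.csv
     | eval joiner=1]
| where cidrmatch(cidr, ip)
| fields - joiner

max=0 keeps every pairing instead of just the first, so cidrmatch can then filter the pairs that actually overlap. For large lookups, a CIDR-typed lookup definition (match_type = CIDR(cidr) in transforms.conf) scales better than a cross join.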