All Posts


Hello. I have an issue with field parsing for CSV files using a props configuration. I should be getting 11 fields for each event/row, but parsing is giving me 17 fields. Here are 3 sample events from that CSV file (the first row is the header row); the props.conf stanza is also provided below:

Col1,Col2,Col3,Col4,Col5,Col6,Col7,Col8,Col9,Col10,Col11
APIDEV,4xs54,000916,DEV,Update,Integrate,String\,Set\,Number\,ID,Standard,2024-07-10T23:10:45.001Z,Process_TIME\,URI\,Session_Key\,Services,Hourly
APITEST,4ys34,000916,TEST,Update,Integrate,String\,Set\,Number\,String,Typicall\,Response,2024-07-10T23:10:45.021Z,CPU_TIME\,URI\,Session_Key\,Type\,Request,Monthly
APITEST,4ys34,000916,DEV,Insert,Integrate,Char\,Set\,System\,ID,On_Demand,2024-07-10T23:10:45.051Z,CPU_TIME\,URI\,Session_Key\,Services,Hourly

(In each event, the backslash-escaped sequences such as String\,Set\,Number\,ID, shown in bold in the original post, should each count as one field.)

props.conf:

[mypropscon]
SHOULD_LINEMERGE=False
LINE_BREAKER=([\r\n]+)
INDEXED_EXTRACTIONS=CSV
KV_MODE=none
disabled=false
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%QZ
HEARDER_FIELD_LINE_NUMBER=1

Any recommendation to resolve this issue would be highly appreciated. Thank you so much for your support as always.
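For reference, a corrected stanza might look like the sketch below. Two things stand out: the setting name is HEADER_FIELD_LINE_NUMBER (the stanza above misspells it), and, as far as I know, Splunk's structured CSV parser honors quoted fields (FIELD_QUOTE) rather than backslash-escaped delimiters, so values containing commas would need to be quoted in the source file. Treat this as a hedged sketch, not a guaranteed fix:

[mypropscon]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = csv
KV_MODE = none
disabled = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%QZ
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,
FIELD_QUOTE = "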
Hi @gcusello, The funny part is that I have the opposite problem. I haven't given the user access to read /var/log/messages, yet it seems like Splunk still reads them. How do I ask a Linux expert specifically? Do you mean on this forum or elsewhere? Thanks
Hi @inventsekar, I don't want to show the actual results but here you can see there are results. Hope this helps.
Use stats to combine them:

index=data_set1 OR index=data_set2
| stats values(*) as * by Domain

This uses values(*) as * to collect all fields from both data sources against their common field Domain. You can then filter what you do or don't want, e.g. after the above:

| where 'T1_Fld 1'="AAB"

(Note that in a where clause, a field name containing a space must be wrapped in single quotes, as shown.)
See this thread https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
Hi @jcorcorans .. one basic question: do you want to onboard the logs, or are the logs already onboarded and they contain a timestamp in epoch format (for example, 1720450799)? Using props.conf during data onboarding/ingestion, we can specify which field holds the timestamp and what its format is, so Splunk will read the timestamp and the logs fine. (Internally, Splunk stores timestamps in epoch format; when displaying search results, Splunk converts the timestamp to a human-readable format.) If you have already ingested/onboarded the logs and a timestamp field is still showing in epoch format, then you can use the convert functions.
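For example, a search-time conversion might look like this; the field name epoch_field is hypothetical, so substitute your own:

| eval readable_time=strftime(epoch_field, "%Y-%m-%d %H:%M:%S")

or, equivalently, with the convert command:

| convert ctime(epoch_field)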
At times these simple issues can give us big headaches. The shortest troubleshooting step is to reinstall the agent (do this only if you have minimal custom configs in the UF).
Wait, wait, wait. Do you mean that your UF has to keep track of over a million files? That can have a huge memory footprint. Also, polling directories containing all those files can be intensive, and not much tuning can help here. Side note - are you sure you need batch input? You're showing events from TailingProcessor, which is used with monitor inputs.
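For contrast, a minimal inputs.conf sketch of the two input types (the path is hypothetical). A batch input deletes files after indexing, so the UF doesn't have to keep tracking them, while a monitor input tracks files indefinitely:

[batch:///var/log/myapp/*.log]
move_policy = sinkhole
disabled = false

[monitor:///var/log/myapp/*.log]
disabled = false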
Non-trial Cloud service uses proper certificates issued by a well-known CA. Since the trial is not meant to process any sensitive data, Splunk uses self-signed certs there. You can still use SSL on those connections; you just have to disable certificate verification on the sending side (like curl's -k option does).
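For example, sending a test event to HEC on a trial instance while skipping certificate verification; the hostname and token are placeholders:

curl -k "https://<your-instance>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from curl"}'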
Hi, Thanks in advance! Just curious, has anybody configured a GitHub webhook to work with Splunk HEC?
Hello guys, I need to collect logs for when the Azure admin resets a password or excludes an account. I have tried using the Splunk Add-on for Microsoft Azure and the Splunk Add-on for Microsoft O365, but I can't receive these logs; I only receive the Microsoft Substrate Management events. Is there any other app I can install to collect this?
Hi Mario, thanks for your response. This is what I get when requesting login.
You said yourself what the LINE_BREAKER is, so Splunk breaks at the end of the line. BTW, you're using indexed extractions, which might further complicate things. I'd try to write a regex for breaking at every second pipe or at the end of the line (if applicable), and probably _not_ use indexed extractions. Something like:

[^|]+\|[^|]+([\r\n|])

Bonus remark - are you sure you need crcSalt?
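In context, that regex would sit in a props.conf stanza like the sketch below (the sourcetype name is hypothetical). Remember that LINE_BREAKER discards whatever the first capture group matches and breaks the event there, so the two pipe-delimited values before the group stay in the preceding event:

[my_pipe_data]
SHOULD_LINEMERGE = false
LINE_BREAKER = [^|]+\|[^|]+([\r\n|])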
Thanks for the response. If I want it to work with SSL enabled, should I be using a different tier of Splunk Cloud?
I'm a bit lost here. Either you mis-pasted the search here or it has no chance of ever matching. You have eventDate as a string produced by strftime, you use it to find something in your lookup, then you strptime a possible match into a numeric value dateLookup. There is no way that eventDate will ever be equal to dateLookup: one is a string, the other is a number.
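To illustrate the mismatch, compare like with like by keeping both sides as strings; the lookup and field names here are hypothetical:

| eval eventDate=strftime(_time, "%Y-%m-%d")
| lookup my_dates.csv date AS eventDate OUTPUT date AS dateLookup
| eval match=if(eventDate == dateLookup, "match", "no match")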
Hello everyone, I do not know why the classification is Threat, even though I chose endpoint.
Adding to the already provided answer, your idea wouldn't work because appendcols adds fields from the appended dataset to the original results row by row (in as-is order). So in your case, the first row from the second lookup would "extend" the first row of the first lookup's contents, the second row would be glued to the second row, and so on. Also, since your "foreach cidr" specifies only a single field, it would yield exactly the same results as if you simply wrote your eval using "cidr" instead of "<<FIELD>>". And since your cidr and ip fields most probably didn't happen to "join" so that they landed in matching rows, your result was always a no-match. I suppose you wanted to add transposed contents of the second lookup to each result of your initial inputlookup search, but it doesn't work that way.
I have installed the Splunk OTel Collector on Linux (Ubuntu). But when I start the service and check its status, I see the logs below. I removed all references to environment variables in agent_config.yaml, keeping basic hostmetrics, otlp receivers and a file exporter. I urgently need to make it work; any help is appreciated.

● splunk-otel-collector.service - Splunk OpenTelemetry Collector
Loaded: loaded (/lib/systemd/system/splunk-otel-collector.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2024-07-10 15:12:54 EDT; 13min ago
Process: 246312 ExecStart=/usr/bin/otelcol $OTELCOL_OPTIONS (code=exited, status=1/FAILURE)
Main PID: 246312 (code=exited, status=1/FAILURE)

Jul 10 15:12:54 patil-ntd systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Jul 10 15:12:54 patil-ntd systemd[1]: Stopped Splunk OpenTelemetry Collector.
Jul 10 15:12:54 patil-ntd systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Jul 10 15:12:54 patil-ntd systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Jul 10 15:12:54 patil-ntd systemd[1]: Failed to start Splunk OpenTelemetry Collector.
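The status output above only shows that the process exited; the collector's own error message should be in the journal. A standard way to pull the recent entries for the unit shown above:

journalctl -u splunk-otel-collector.service --no-pager -n 100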
Yes, it doesn't work with an SSL-enabled client (in your case curl; and it's SSL, or more precisely TLS, not ssh) because of a self-signed cert. That's normal with Cloud trial instances. And your question is?
As @JohnEGones suggested, cidrmatch is not the answer. Set MATCH_TYPE = CIDR(cidr) in the lookup definition for cidr.csv following that document, then use the lookup command:

| inputlookup IP_add.csv
| rename "IP Address" as ip
| lookup cidr.csv cidr as ip output cidr
| eval match=if(isnull(cidr), "No Match", cidr)
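For completeness, the match type lives in the lookup definition, not in the CSV itself. In transforms.conf that might look like the sketch below, where the stanza name cidr_lookup is hypothetical; the lookup command would then reference cidr_lookup instead of cidr.csv:

[cidr_lookup]
filename = cidr.csv
match_type = CIDR(cidr)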