All Posts

@ITWhisperer wrote: Splunk's version of arrays is the multivalue field, so if you change your input to a multivalue field, you could do something like this:

| eval Tag = split(lower("Tag3,Tag4"),",")
| spath
| foreach *Tags{}
    [| eval field="<<FIELD>>"
     | foreach <<FIELD>> mode=multivalue
        [| eval tags=if(isnull(tags), if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null()), mvappend(tags, if(mvfind(Tag,lower('<<ITEM>>')) >= 0, field, null())))]]
| stats values(tags)

Thank you for your response and the example; currently it is returning 0 results for me. Could it have something to do with my Splunk version? I am using 8.0.5.
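One possible cause worth checking (an assumption, not confirmed in the thread): mode=multivalue on the foreach command is, as far as I recall, a newer addition than Splunk 8.0.5, so on that version the inner loop may simply produce nothing. If that turns out to be the problem, a rough, untested sketch of the same idea using mvmap (which is available in 8.0) could look like this; match is just a temporary helper field introduced here for illustration:

| eval Tag = split(lower("Tag3,Tag4"),",")
| spath
| foreach *Tags{}
    [| eval match=mvmap('<<FIELD>>', if(mvfind(Tag, lower('<<FIELD>>')) >= 0, "<<FIELD>>", null()))
     | eval tags=if(isnull(tags), match, mvappend(tags, match))]
| stats values(tags)

The null handling mirrors the original suggestion; only the per-item iteration is moved from foreach mode=multivalue into mvmap.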
@danielbb Please, don't forget to accept this solution if it fits your needs. 
spath works because you're extracting just the json part with the rex command and applying spath only to that json field. Yes, you can create a macro, but you will still need to manually invoke that macro in your search. Setting KV_MODE to json will not hurt (except perhaps a minor performance hit), but it will not help either.
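For reference, a minimal sketch of what such a macro could look like - the macro name, extracted field name and regex below are placeholders, not the poster's actual rex:

macros.conf:
[extract_json_payload]
definition = rex field=_raw "(?<json_payload>\{.+\})" | spath input=json_payload

It would then have to be invoked explicitly in each search, e.g. index=your_index sourcetype=your:sourcetype | `extract_json_payload` | ..., which is exactly the manual step mentioned above.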
And can I try setting KV_MODE = json just to check my luck? What would be the consequences if it doesn't work? Please guide me through the steps.
Hi everyone, I've recently tested the new Splunk AI feature within Splunk ITSI to define thresholds based on historic data/KPI points. ("Test" as in I literally created very obvious dummy data for the AI to process and find thresholds for - a sort of trust test of whether the AI really does find usable thresholds.)

Example: every 5 minutes the KPI takes the latest value, which I've set to correspond with the current weekday (+ minimal variance). For example: all KPI values on Mondays are within the range of 100-110, Tuesdays 200-210, Wednesdays 300-310 and so forth. This is a preview of the data:

Now, after a successful backfill of 30 days, I would have expected the AI to see that each weekday needs its own time policy and thresholds. However, the result was this: no weekdays detected, and instead it finds time policies for every 4 hours regardless of days?

By now I've tried all possible adjustments I could think of (increasing the number of data points, greater differences between data points, a different algorithm, waiting for the next day in the hope it would recalibrate itself over midnight, etc.). Hardly any improvements at all, and the thresholds are not usable like this, as they would not catch outliers on Mondays (expected values 100-110; an outlier of 400 would not be detected because it's still within the thresholds).

Thus my question to the community: does anyone have ideas/suggestions for how I could make the AI understand the simple idea of "weekly time policies", and how I could tweak it (aside from doing everything manually and ditching the AI idea as a whole)? Does anyone have good experience with Splunk AI defining thresholds, and if so, what were the use cases?
Yes, it's the latter case. But the search query I mentioned above (spath) is working perfectly. Is there any way I can achieve this? If this is not possible, can I make a macro of that query and use it in the search query? I don't know how the customer would feel about that.
I know it's a json. But is it the whole event? Or does the event have additional pieces? So does the event look like this: { "a":"b", "c":"d" } or more like this <12>Nov 12 20:15:12 localhost whatever: data={"a":"b","c":"d"} and you only want the json part parsed? In the former case, it's enough to set KV_MODE to json (but KV_MODE=json doesn't handle multilevel field names). If it's the latter - that's the situation I described - Splunk cannot handle the structured _part_ automatically.
Hi @PickleRick , it's structured json data we have, and it is not extracting field values automatically. Every time we need to run a command in the search, which is not what the customer wants. They want this extraction to happen by default.
Yes, and the issue carries over to the cloned entry.
Unfortunately, at this moment Splunk can only do automatic structured data extraction if the whole event is well-formed structured data. So if your whole event is a json blob - Splunk can interpret it automatically. If it isn't, because it contains some header or footer, it's a no-go. There is an open idea about this on ideas.splunk.com - https://ideas.splunk.com/ideas/EID-I-208 Feel free to upvote it. For now all you can do is trim your original event to contain only the json part. (But then you might lose some data, I know).
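A minimal sketch of that trimming approach, assuming a syslog-style header in front of the json - the sourcetype name and the regex are placeholders:

props.conf:
[your:sourcetype]
# index-time: drop everything before the first opening brace (the header data is lost)
SEDCMD-strip_header = s/^[^{]+//
# search-time: auto-extract the now well-formed json event
KV_MODE = json

The SEDCMD part has to live where parsing happens (heavy forwarder or indexers) and only affects newly indexed data; KV_MODE belongs on the search heads.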
Description: Hello, I am experiencing an issue with the "event_id" field when transferring notable events from Splunk Enterprise Security (ES) to Splunk SOAR. Details: When sending the event to SOAR using an Adaptive Response Action (Send to SOAR), the event is sent successfully, but the "event_id" field does not appear in the data received in SOAR. Any assistance or guidance to resolve this issue would be greatly appreciated. Thank you
Yes, this is the way. Thanks @ITWhisperer  this is exactly what I was looking for.
Hi @pumphreyaw , @mattymo  Now I am stuck on the same problem. We don't actually have a HF. We have a deployment server which pushes apps to our manager and deployer. From there, the manager will push apps to the peer nodes. We have 3 search heads and a deployer. Where do I need to put these configurations to extract the json data? Can you please help me step by step?
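A hedged sketch of where such a setting would typically go in a layout like this (the app and sourcetype names below are placeholders): KV_MODE is a search-time setting, so it needs to reach the search heads. With a search head cluster, that means placing an app on the deployer and pushing it, for example:

# on the deployer: $SPLUNK_HOME/etc/shcluster/apps/my_json_extractions/local/props.conf
[your:sourcetype]
KV_MODE = json

# then push the bundle to the cluster members
splunk apply shcluster-bundle -target https://<any_sh_member>:8089 -auth <admin_user>:<password>

Index-time settings (SEDCMD, LINE_BREAKER, TIME_PREFIX, etc.), if needed, would instead go to the indexer peers via the cluster manager's manager-apps directory and splunk apply cluster-bundle.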
It shouldn't hurt. If you escape something that doesn't need escaping, nothing bad should happen. It's just ugly.
Hello @dbray_sd  Have you tried cloning the older input and creating a new one? Sometimes the checkpoint fails during an upgrade, but cloning the input will create a new checkpoint and possibly resolve your issue.
There can be multiple reasons behind streamfwd.exe not running; you should file a support case to get this fixed.
Hi, Not sure if you've tried this but looks like a similar issue in a lower version upgrade. https://community.splunk.com/t5/Installation/After-upgrading-from-Splunk-6-2-3-to-6-3-0-why-am-I-getting/m-p/252282 Cheers Meaf
No, where do I need to specify this? What will this query do? Please explain.
Hi, We are running a Splunk Enterprise HWF with a generic s3 input to fetch objects from an s3 bucket. However, each time we try to move this input onto a new, identical HWF, we have issues getting the same data from the same bucket. Both instances are on Splunk 9.2, however the Splunk AWS TA versions are different. Both are pipeline managed, so they have all the same config / certs. The only difference we can see is that in the AWS TA input log, the 'broken' input never creates the S3 connection before fetching the s3 objects and seems to think the bucket is empty.

Working input

2025-01-15 10:25:09,124 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:162 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 11:25:09+00:00"
2025-01-15 10:25:09,125 level=INFO pid=5806 tid=Thread-6747 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_get_bucket:364 | bucket_name="bucketname" datainput="input", start_time=1736918987 job_uid="8888", phase="fetch_key" | message="Create new S3 connection."
2025-01-15 10:25:09,130 level=INFO pid=5806 tid=Thread-6841 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=s3_key_processer.py:_do_index:148 | bucket_name="bucketname" datainput="input" last_modified="2025-01-15T04:00:41.000Z" phase="fetch_key" job_uid="8888" start_time=1736918987 key_name="bucketobject" | message="Indexed S3 files." size=819200 action="index"

Broken input

2025-01-15 12:00:33,369 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.common.aws_credentials pos=aws_credentials.py:load:217 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="8888", phase="fetch_key" | message="load credentials succeed" arn="AWSARN" expiration="2025-01-15 13:00:33+00:00"
2025-01-15 12:00:33,373 level=INFO pid=3157753 tid=Thread-4 logger=splunk_ta_aws.modinputs.generic_s3.aws_s3_data_loader pos=aws_s3_data_loader.py:_fetch_keys:378 | datainput="input" bucket_name="bucketname", start_time=1736942432 job_uid="88888", phase="fetch_key" | message="End of fetching S3 objects." pending_key_total=0

Unsure where to go from here as we have tried this on multiple new machines. Thanks, Meaf
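As a first diagnostic step (a hedged suggestion, not something from the thread), it may help to compare the two hosts side by side in the add-on's own logs; the source pattern below is a guess and may need adjusting to however the TA log files are named on your HWFs:

index=_internal source=*aws*s3* (level=ERROR OR level=WARNING)
| stats count by host, source, message

If both inputs log "load credentials succeed" but only the broken one jumps straight to pending_key_total=0, the stored checkpoint state for that datainput (rather than credentials or connectivity) would be the next thing to look at.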