All Posts

@gcusello Actually, it was installed on one search head only, not the deployer.
Hi @Keerthi, I suppose you have a script that calls the API. Try launching the script again manually. I don't know how your script runs, but if you modify it to also take the old data, you should be able to re-run it. Ciao. Giuseppe
Hi @aasserhifni, I suppose you have a Search Head Cluster. Did you remove the app from the $SPLUNK_HOME/etc/shcluster/apps folder on the SH Deployer, and then run the deploy command on the Deployer? Ciao. Giuseppe
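For reference, a minimal sketch of the push step run on the deployer, assuming it can reach a cluster member on management port 8089 (host, port, and credentials are placeholders):

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:<password>

After the push, the members pick up the new bundle (with a rolling restart if needed) and the removed app should disappear from them.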
Hi, my index stopped receiving data 3 months ago. On checking, I found that the data was not ingested because of an API token that had expired. I have fixed it now, and I want the missing data to be loaded again. How do I re-run the ingestion for the index?
@gcusello I already did that, but without any useful result.
| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)
| eval offset = mvappend("24", "16", "8")
| eval segment_rev = mvrange(0, 3)

For the above, should the second set have been given a different value for the field? Additionally, when I run the example, I received:

04-18-2024 13:36:06.590 ERROR EvalCommand [102993 searchOrchestrator] - The 'bit_shift_left' function is unsupported or undefined.

I believe the function requires 9.2.0+.
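If upgrading is not an option, note that a left shift by n bits is just multiplication by 2^n, so on versions without the bit functions you can approximate bit_shift_left with plain arithmetic. A minimal sketch, using a hypothetical numeric field named value:

| eval shifted = value * pow(2, 8)

This matches bit_shift_left(value, 8) for non-negative integers within SPL's numeric range.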
Hi @aasserhifni, you can manually remove an app from a standalone Search Head by removing its folder and restarting Splunk. If you have a SH Cluster, you have to remove it from the Deployer ($SPLUNK_HOME/etc/shcluster/apps folder) and then push the apps. Ciao. Giuseppe
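A minimal sketch of the standalone case, assuming the app folder is named threatq as in the question (adjust the app name and $SPLUNK_HOME to your installation):

# remove the app folder from the search head
rm -rf $SPLUNK_HOME/etc/apps/threatq
# restart Splunk so the removal takes effect
$SPLUNK_HOME/bin/splunk restart

If the app keeps reappearing after this, something is redeploying it, typically a deployment server or, in a cluster, the deployer, so check those before removing the folder again.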
Another bump. I've run into this issue, too.
Hi @Ryan.Paredez and @Troy.Partain, Thank you for the reply; that clarifies the issue for me. I'll be more careful with my demo presentations in the future, especially with potential customers. Hope you both have a great day!
The split function is extracting the desired field, but then rex reduces it to the part before the first underscore (_). Remove the rex command and the query should work as expected. To create the field at index time instead, add a transform in transforms.conf that uses INGEST_EVAL, and reference it from props.conf: INGEST_EVAL = aws_service=mvindex(split(source,":"),2)
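A minimal sketch of the two stanzas, assuming the sourcetype is aws:metadata as in the question (the transform name set_aws_service is illustrative):

# props.conf
[aws:metadata]
TRANSFORMS-aws_service = set_aws_service

# transforms.conf
[set_aws_service]
INGEST_EVAL = aws_service=mvindex(split(source,":"),2)

Since INGEST_EVAL creates an indexed field, you may also need a fields.conf stanza ([aws_service] with INDEXED = true) for the field to behave correctly at search time.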
I tried to remove the threatq application files from /etc/apps on the search head, but every time I remove them they keep reappearing, even though I removed its files from /etc/users as well. Is there any solution for this?
OK, is the "Dataframe row :" part really part of the event, or just a header you posted before the actual event? Anyway, it seems like a relatively well-formed (unless I'm missing something) JSON embedded (and escaped) within another JSON, possibly prepended with that "Dataframe row :" header. I'd say: cut the header if applicable, parse the outer JSON, extract the inner JSON, split it into multiple events if needed, then spath the inner JSON(s). And don't use regexes to manipulate structured data unless you really can't avoid it.
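A minimal sketch of that approach, assuming the escaped inner JSON sits under an outer key named data (both the header text and the key name are assumptions, so adjust to your actual events):

| eval payload=replace(_raw, "^Dataframe row :\s*", "")
| spath input=payload path=data output=inner
| spath input=inner

The eval strips the header if present, the first spath pulls the escaped inner JSON out of the outer document, and the second spath parses the inner JSON into fields.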
Hi All, I want to extract the service name from sourcetype="aws:metadata" and the source field. Example: 434531263412:eu-central-1:elasticache_describe_reserved_cache_nodes_offerings. I am using this query:

index=* sourcetype=aws:metadata
| eval aws_service=mvindex(split(source,":"),2)
| rex field=aws_service "(?<aws_service>[^_]+)"
| table aws_service source
| dedup aws_service

Using this I get the result elasticache. But in the case of "434531263412:us-west-2:nat_gateways" it just extracts nat, when it should be gateways. Similarly, for 434531263412:eu-central-1:application_load_balancers, it extracts application. I was thinking we could check for the keyword and update the value. I want to add this to props.conf so the aws_service field gets created from source. Can anyone please help me achieve this? Regards, PNV
If you examine and try to understand the solution I posted, you will see there is a not-equals condition on the regex. Perhaps you could have figured out for yourself that you could simply change not equals to equals!

| regex Name="NODATA"
Hi ITWhisperer, Thank you for your response. But the query you provided eliminates the job names that contain the NODATA string, whereas we only need the job names that contain the NODATA string; all the other jobs can be eliminated. Kindly help us with this. Thank you.
Wait a second. What does this have to do with any events returned from the index? So far you're only operating on the data from the lookup. Also, unless it's purely for display (and even then it's a disputable practice), you don't want to merge values into multivalued fields this way. You'll effectively get two multivalued fields with no connection between them whatsoever. So if you wanted to sort one of them (for example, to list passed exams before failed ones, or vice versa), you can't reorder the other field the same way. They are just two separate fields with multivalued contents, and there is no relationship between those contents. (And should any of those values prove to be empty, the whole field will "squash", so you will not have any gaps between values.)
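A minimal illustration of the problem with runnable throwaway data (the field names are just for demonstration):

| makeresults
| eval ExamID=mvappend("125", "120"), Status=mvappend("Fail", "Pass")
| eval ExamID=mvsort(ExamID)

After the mvsort, ExamID reads 120, 125 while Status still reads Fail, Pass, so the value that used to line up with 125 now appears to belong to 120.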
You are on the right lines with streamstats - please share some sample anonymised events for us to work with to find you a solution.
Try something like this:

<your search> ...
| eval exam_result=mvzip(ExamID, Status, "~")
| fields - ExamID Status
| mvexpand exam_result
| eval ExamID=mvindex(split(exam_result, "~"), 0), Status=mvindex(split(exam_result, "~"), 1)
| eval extra_status = if(ExamID>=120 AND ExamID<=125 AND match(Status, "Pass"), "GOOD", null())
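Note that the "~" in mvzip is just a delimiter assumed not to occur inside the ExamID or Status values; pick a different character if it can. The mvzip/mvexpand round trip keeps each ExamID paired with its own Status on a separate row, which avoids the lost-correlation problem described above.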
You could try extracting each job as a complete event before extracting the individual fields. You can then filter out the jobs you don't want. (BTW, your regex seems to have way too many backslashes, but you may need them if your actual data is different from the example you shared.)

| rex max_match=0 "(?<job>\\\\\"jobname\\\\\":\s*\\\\\"[^\\\]+.*?\\\\\"status\\\\\":\s*\\\\\"ENDED OK.*?Timestamp\\\\\": \\\\\"\d+\s*\d+\:\d+\:\d+.*?execution_time_in_seconds\\\\\": \\\\\"[\d\.\-]+)"
| mvexpand job
| rex field=job "\\\\\"jobname\\\\\":\s*\\\\\"(?<Name>[^\\\]+).*?\\\\\"status\\\\\":\s*\\\\\"(?<State>ENDED OK).*?Timestamp\\\\\": \\\\\"(?<TIME>\d+\s*\d+\:\d+\:\d+).*?execution_time_in_seconds\\\\\": \\\\\"(?<EXECUTION_TIME>[\d\.\-]+)"
| regex Name!="NODATA"
| table TIME Name State EXECUTION_TIME
I can find my DBConnect input inside the "/app/splunk/var/log/splunk/splunk_app_db_connect_job_metrics.log" log. It pretty much runs a "Select * from a table" every 4 hours and sends the results to an index. It always runs to completion with "status=COMPLETED", but at times it finishes with an 'error_count > 0', and we notice that we don't get those log events added to the index for that run. Where can I see what these errors are and why they are generated?
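A minimal sketch of a search one might start from, assuming the DB Connect log files are indexed into _internal as is typical (exact source names and fields vary by DB Connect version):

index=_internal source=*splunk_app_db_connect* ERROR

The job metrics log usually only carries the counters, so the per-row error details, if logged, tend to be in the neighbouring splunk_app_db_connect_*.log files on the same host.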