
Hi, thanks for answering - it works perfectly with that:

| eval latest_time=strftime(strptime(latest_time, "%FT%T.%Q%Z"), "%F %T")

Thanks again for your answer. Laurent
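For reference, the same parse-then-reformat round trip can be checked outside Splunk. Here is a Python sketch under a few assumptions: %F/%T expand to %Y-%m-%d/%H:%M:%S, and Splunk's %Q (subseconds) and %Z specifiers roughly correspond to Python's %f and %z. The sample timestamp is made up:

```python
from datetime import datetime

# A hypothetical ISO-8601 timestamp like the latest_time values above
latest_time = "2024-05-01T12:30:45.123+0000"

# strptime: parse the string into a datetime (%f = subseconds, %z = UTC offset)
parsed = datetime.strptime(latest_time, "%Y-%m-%dT%H:%M:%S.%f%z")

# strftime: reformat as "YYYY-MM-DD HH:MM:SS", i.e. the "%F %T" output format
print(parsed.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-05-01 12:30:45
```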
When someone provides you an example using the makeresults command, it is an example you can run to DEMONSTRATE the solution - in my posting I showed the example and said you only need the last two lines. The rex statement extracts fields from data. This one will extract "env" and "dev" into fields a and b - call them what you like:

| rex field=source ".*\/(?<a>\w+)\/(?<b>\w+)\/.*"
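If it helps to see the regex in isolation, the same named-group pattern behaves the same way in Python (the sample path below is made up to match the ones discussed in this thread):

```python
import re

# Hypothetical source path with env/dev as the 3rd- and 2nd-to-last segments
source = "/test1/folder1/scripts/monitor/log/env11/dev11/Error.log"

# Same pattern as the rex: the greedy .* forces a and b onto the last two
# directory segments before the file name
m = re.match(r".*/(?P<a>\w+)/(?P<b>\w+)/.*", source)
print(m.group("a"), m.group("b"))  # env11 dev11
```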
So, use the rex statement I provided, change the 4 to the number of lines you want to skip, and change copyofraw to the name of the field you want to create. Note that _raw is a FIELD in Splunk, not an environment variable - Splunk has fields that are part of your data, and _raw and _time are key fields.
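The skip-N-lines idea behind that rex can be sketched in Python. This is only a plausible equivalent, since the original rex isn't quoted in this post, and the sample _raw value is made up:

```python
import re

# A made-up multi-line _raw value
raw = "line 1\nline 2\nline 3\nline 4\nline 5\nline 6"

# Consume the first 4 lines, capture the remainder into "copyofraw";
# change the 4 to skip a different number of lines
m = re.match(r"(?:[^\n]*\n){4}(?P<copyofraw>.*)", raw, re.DOTALL)
print(m.group("copyofraw"))  # prints "line 5" and "line 6" on two lines
```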
Dear community members, I am running Splunk Enterprise on my local Windows system. Splunk Web is up and running. I have created a Lambda function with a CloudWatch Logs trigger, where on every invocation it should send the CloudWatch logs to Splunk. But on invocation I am getting a connection refused error. Please find the error below. Can someone help me understand it? ERROR Invoke Error
Thanks, this helped a lot. The docs are a bit lacking in this area.
my table looks like so:

status  count  percent
200     895    95.927117
404     14     1.500536
304     12     1.286174
303     12     1.286174

I have been trying to update the table_cell_highlighting.js in the dashboard example app so that it only highlights the percentage cell for status=200. Please point me in the right direction - thx
Hi, has anyone installed the "Add-on for Cloudflare data" app? I am just after some documentation on how it is supposed to work and the setup process.
You are making lots of apparently conflicting statements. It is important to illustrate/supplement your problem with sample/mock data and desired results, as well as a plain-language statement of the logic between illustrated data and desired results. Can you confirm that you are looking for results like

dev    env    source
dev11  env11  /test1/folder1/scripts/monitor/log/env11/dev11/Error.log
dev21  env21  /test2/folder1/scripts/monitor/log/env21/dev21/Error.log

In other words, you want the second-to-last segment of the path to be in a field (named dev) and the third-to-last segment to be in another field (named env). If my mind-reading is correct, you can use the following:

| eval mvsource = split(source, "/")
| eval env = mvindex(mvsource, -3), dev = mvindex(mvsource, -2)

This is the emulation I use to produce the sample data:

| makeresults format=csv data="source
/test1/folder1/scripts/monitor/log/env11/dev11/Error.log
/test2/folder1/scripts/monitor/log/env21/dev21/Error.log"
``` data emulation above ```

Play with it and compare with real data
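The split/mvindex approach maps directly onto list indexing, so the same logic can be sanity-checked in Python against one of the sample paths above:

```python
# One of the sample source paths from the emulation
source = "/test1/folder1/scripts/monitor/log/env11/dev11/Error.log"

# split(source, "/") then mvindex(..., -3) and mvindex(..., -2)
segments = source.split("/")
env, dev = segments[-3], segments[-2]
print(env, dev)  # env11 dev11
```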
Hi @fredsnertz, see in the interesting fields from your search what the real field name of dataName is (probably entity.dataName) and use it. With the JSON format, field names are composite. Ciao. Giuseppe
Hi @abi2023, using the following search, you can see if you have full queues:

index=_internal source=*metrics.log sourcetype=splunkd group=queue
| eval name=case(name=="aggqueue","2 - Aggregation Queue", name=="indexqueue", "4 - Indexing Queue", name=="parsingqueue", "1 - Parsing Queue", name=="typingqueue", "3 - Typing Queue", name=="splunktcpin", "0 - TCP In Queue", name=="tcpin_cooked_pqueue", "0 - TCP In Queue")
| eval max=if(isnotnull(max_size_kb),max_size_kb,max_size)
| eval curr=if(isnotnull(current_size_kb),current_size_kb,current_size)
| eval fill_perc=round((curr/max)*100,2)
| bin _time span=1m
| stats Median(fill_perc) AS "fill_percentage" perc90(fill_perc) AS "90_perc" max(max) AS max max(curr) AS curr by host, _time, name
| where (fill_percentage>70 AND name!="4 - Indexing Queue") OR (fill_percentage>70 AND name="4 - Indexing Queue")
| sort -_time

If you have 100% full queues, you have to check the network bandwidth occupation and possibly modify some configuration, e.g. in limits.conf of the Universal Forwarders:

[thruput]
maxKBps = 0

Ciao. Giuseppe
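The fill percentage that the search computes is just current size over max size; a quick Python check of the eval's arithmetic with made-up queue numbers:

```python
# Made-up queue metrics: current and max queue size in KB
current_size_kb = 450
max_size_kb = 1024

# fill_perc = round((curr/max)*100, 2), as in the eval above
fill_perc = round((current_size_kb / max_size_kb) * 100, 2)
print(fill_perc)  # 43.95
```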
Hi @bigll, as @ITWhisperer said, this seems to be JSON format, so the INDEXED_EXTRACTIONS = json option in props.conf or the spath command (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Spath) is the easiest solution to your requirement. Also note that the rex command has a different format to extract fields: the field definition must be located inside the rex definition, as in the following example using your data:

| rex "before\":\"(?<summary_before>[^\"]+)\".\"after\":\"(?<summary_after>[^\"]+)"

You can see how to extract and test your regex at https://regex101.com/r/22aHz1/1 Ciao. Giuseppe
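The same pattern can also be tried in Python against a made-up event in the shape the rex expects (Python's named-group syntax is ?P<name> rather than ?<name>):

```python
import re

# Hypothetical raw event containing "before" and "after" values
raw = '{"summary":{"before":"old value","after":"new value"}}'

# Same named-group pattern as the rex above
m = re.search(r'before":"(?P<summary_before>[^"]+)"."after":"(?P<summary_after>[^"]+)', raw)
print(m.group("summary_before"), m.group("summary_after"))  # old value new value
```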
* Known issue: This has been reported and is being worked on by our Dev team. So far it has been found to be a display issue that doesn't affect the functionality of the filter - we also acknowledge that it makes DS/DC harder to use for Splunk admins.

* Workaround: There's no workaround for the bug itself - the issue only affects filtering on machine types, so if it's possible to arrange filters which don't require machine types (like names, addresses, etc.), those filters work. After the fix, you can of course return to the more convenient machine type filters.

* Fixed version: The fix has been scheduled and will be available in maintenance version 9.2.2. GA is currently planned for mid-June, which is subject to change according to the build schedule/load.

If it still does not work with 9.2.2, please check with Splunk Support.
Thank you very much kprior201! The issue was that I was executing the search not within the ES app when I was testing it, but in the Search and Reporting app. I did not have some of the manually extracted fields in my ES app; once I added them, the correlation search worked well!
Also, /test1/folder1/scripts/monitor/log/env/dev/Error.log is a dynamic source field value, not a hardcoded value, so I need to integrate the index command with makeresults, something like this: index="monitoring" source="/test1/folder1/.scripts/monitor/log/env/dev/Error.log" and then extract the values of env and dev (which can be dynamic) into separate fields.
yes, /test1/folder1/.scripts/monitor/log/env/dev/Error.log - I want field 1 = value of env and field 2 = value of dev, as there is scope for this changing later.
sorry for the confusion. I want to skip the first 7 lines of the environment variable "_raw" and then copy the info from the 7th line onward to a new variable "_raw_refined"
The makeresults / eval is an example you can run to show you how this works - in your first post you said you wanted lines 5 and 6, now you want to skip the first 7 lines, and your post says you want to skip the first 10 lines? Confused... Please give more precise information about your requirement.
Not clear what you are saying - your original post says you want the word "dev", but do you also want the word "env"? Is "env" something that can change?
I want to execute this skip-the-first-7-lines on the Splunk default environment variable "_raw", and not on eval _raw="line 1 line 2 line 3 line 4 line 5 line 6"
Thanks for your response. But my file location is /test1/folder1/.scripts/monitor/log/env/dev/Error.log, so I am interested in getting both values, env and dev.