All Posts

Thank you! If I could ask one more question: I'm now wanting to filter that out a bit. When looking that up, I'm told to do | where user!="SYSTEM" or something like that:

EventCode=4624 user!="*$" | timechart span=1d dc(user) as "Unique Users" | where user!="SYSTEM"

That has me thinking about two questions. First, if != is the sign for EXCLUDE, then why does user!="*$" work in the statement above? Second, since it DOES work, how can I exclude multiple values? For example: | where user!="SYSTEM","Administrator","Guest", etc.?
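A minimal sketch of one way to exclude several users at once, assuming the same EventCode=4624 data and user field from the example above; the IN operator (negated with NOT) is one common pattern, and chaining user!=... conditions also works, so treat this as illustrative rather than the only form:

EventCode=4624 user!="*$" NOT user IN ("SYSTEM", "Administrator", "Guest")
| timechart span=1d dc(user) as "Unique Users"

Note that a | where user!=... clause placed after the timechart has nothing to act on, because only _time and "Unique Users" survive the timechart, so the exclusions generally need to happen before it.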
I tried both

index=MyPCF | fields server instance cpu_percentage | eval cpu_percentage=round(cpu_percentage,2) | eval server_instance = 'server' + "_" + 'instance' | timechart mAX(cpu_percentage) as CPU_Percentage by server_instance usenull=true limit=0 | foreach * [| fieldformat "<<FILED>>"=cpu_percentage . "%"]

and

index=MyPCF | fields server instance cpu_percentage | eval cpu_percentage=round(cpu_percentage,2) | eval server_instance = 'server' + "_" + 'instance' | timechart mAX(cpu_percentage) as CPU_Percentage by server_instance usenull=true limit=0 | foreach * [| eval "<<FILED>>"=cpu_percentage ."%"]

Below is the output I get:

_time                          Server_1   Server_2   Server_3
2024-03-25T14:00:00.000-0400   5.14       1.98       3.83
2024-03-25T14:01:00.000-0400   2.93       1.64       3.65
2024-03-25T14:02:00.000-0400   3.33       2.28       3.31
2024-03-25T14:03:00.000-0400   3.54       2.11       3.67
2024-03-25T14:04:00.000-0400   4.02       1.94       3.81
2024-03-25T14:05:00.000-0400   4.3        3.58       3.78
2024-03-25T14:06:00.000-0400   3.13       2.72       3.46
2024-03-25T14:07:00.000-0400   2.58       2.07       3.62
2024-03-25T14:08:00.000-0400   2.33       1.77       3.67
2024-03-25T14:09:00.000-0400   2.66       1.75       4.01
2024-03-25T14:10:00.000-0400   3.2        1.94       4.58
2024-03-25T14:11:00.000-0400   2.76       1.59       4.57
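A minimal sketch of how I would expect the foreach step to look, assuming the goal is display-only percent formatting of the per-server columns shown in the output (the Server_* wildcard is an assumption based on that sample); the substitution token is spelled <<FIELD>>, and '<<FIELD>>' refers to each column's own value, since cpu_percentage no longer exists as a field after the timechart:

index=MyPCF
| fields server instance cpu_percentage
| eval cpu_percentage=round(cpu_percentage,2)
| eval server_instance = 'server' + "_" + 'instance'
| timechart max(cpu_percentage) by server_instance usenull=true limit=0
| foreach Server_* [| fieldformat <<FIELD>> = '<<FIELD>>' . "%"]

Using fieldformat rather than eval keeps the underlying values numeric, which matters if they feed a chart; this is a sketch under those assumptions, not a verified fix.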
What about defining this on a Cloud Index you create? I get a default app assigned and there is no field available to edit this. Thanks
How do you copy a knowledge object from one app to another in Splunk?
Do I need to add anything else other than the inputlookup? I am still unsuccessful at getting a match when I know there are a ton.
Yes, it's an IP address.
Hi @pop345, what's the content of the lb field? I suppose that it's an IP address. Anyway, in the first example, you should rename the lb field to match the fields in the main search (src, dest). In the second example, you perform a full text search on _raw. Ciao. Giuseppe
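A minimal sketch of the rename approach described above, assuming the lb values come from a lookup (my_lookup is a placeholder name) and the main search carries src and dest fields; the subsearch returns the renamed field so its values become an implicit filter on src:

index=your_index sourcetype=your_sourcetype [| inputlookup my_lookup | rename lb as src | fields src]

Repeating the subsearch with rename lb as dest (ORed with the first) would cover matches on the destination side as well; every name here other than lb, src, and dest is a placeholder.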
Hi @selvaraj4u, I'm not sure about Dashboard Studio, but with Classic dashboards, you should try: index=xyz query1 latest=now() [ search index=xyz query2 earliest=global_time.earliest latest=global_time.latest] In other words, you should force the time boundaries to be different from the Time Picker. Ciao. Giuseppe
I try to avoid join where possible, but I can't make this query work without it. See if this helps you.

index=_internal splunk_server=* source=*splunkd.log* sourcetype=splunkd (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose OR component=MetricSchemaProcessor OR component=MetricsProcessor) (log_level=WARN OR log_level=ERROR OR log_level=FATAL)
| rex field=event_message "\d*\|(?<st>[\w\d:-]*)\|\d*"
| eval data_sourcetype=coalesce(data_sourcetype, st)
| rename data_sourcetype as sourcetype
| fields sourcetype event_message component
| join sourcetype [| tstats count where index=* by sourcetype, index | fields - count ]
| table sourcetype component event_message index
the text "NaN" (does occasionally happen when the source is a SQL query) either  This explains it.  I was wondering why typeof(num) should be Number when num had value "NaN". Whoever wrote that co... See more...
the text "NaN" (does occasionally happen when the source is a SQL query) either  This explains it.  I was wondering why typeof(num) should be Number when num had value "NaN". Whoever wrote that code in typeof must have SQL in mind.  A SQL query only returns "NaN" when the data type is numeric.  If you are programming against results from a SQL query in any language, you always need to write a logic for this possible return.
The stats command can do that, although I'm not sure how it will handle "N/A". | stats avg('Column 4') as "Column 2" by 'Column 1'  
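On the "N/A" concern, a minimal sketch of one way to drop those values first, assuming the lookup file is called my_lookup.csv (a placeholder) and the column names match the table in the question below; tonumber() turns the non-numeric "N/A" entries into null, and avg() ignores nulls:

| inputlookup my_lookup.csv
| eval value=tonumber('Column 4')
| stats avg(value) as "Column 2" by "Column 1"

With the sample rows this should come out to 37.5 for Value 1 and 40 for Value 2, matching the expected output.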
The distinct_count (dc) function will give the number of unique values of a field. EventCode=4624 user!="*$" | timechart span=1d dc(user) as "Unique Users"
I have a lookup table that looks like this:

Column 1   Column 2   Column 3   Column 4
Value 1    -          -          15
Value 1    -          -          60
Value 2    -          -          75
Value 2    -          -          N/A
Value 2    -          -          5

I want to calculate the average of all the values in Column 4 (that aren't N/A) that have the same value in Column 1. Then I want to output that as a table:

Column 1   Column 2
Value 1    37.5
Value 2    40
Hi, I am creating a dashboard using Dashboard Studio, and I want to run a query with a subsearch. I want to use the time from the global time picker for the subsearch and a different time for the main search. How do I do it? I have configured an input field for time with the token global_time. My query looks like this: index=xyz query1 earliest=global_time.earliest latest=now() [search index=xyz query2 earliest=global_time.earliest latest=global_time.latest] This is not working - can you suggest how to make it work?
This corrected itself after I toggled the server's role from standalone to distributed, then back to standalone -- then clients started showing up in the UI. Monitoring Console, General Setup, Mode (top left). Go figure.
FWIW, happening here as well, with 9.2.0.1. Checked all The Things mentioned in that doc everyone keeps referencing, including those stanzas mentioned numerous times here. Another symptom of mine is that the ForwarderManager (deployer) doesn't appear in my monitored servers in the SplunkManager (aka Master).
I've tried this before but wasn't successful in finding any matches, hence I resorted to an eval. Anyway, can you expand on the examples you provided? Is there an eval statement or search that I should be using?
This is more of an advisory than a question. I hope it helps. If you are a Splunk Cloud customer, I strongly suggest you run this search to ensure that Splunk Cloud is not dropping events. This info is not presented in the Splunk Cloud monitoring console and is an indicator that indexed events are being dropped.

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Error parsing events from message content"
| eval bytesRemaining=trim(bytesRemaining,":")
| stats sum(bytesRemaining) as bytesNotIndexed

What these errors are telling us is that some SQSSmartbusInputWorker process is parsing events and that there is some type of invalid field or value in the data, in our case _subsecond. When this process hits the invalid value, it appears to drop everything else in the stream (i.e. bytesRemaining). So this is also to say that bytesRemaining contains events that were sent to Splunk Cloud but not indexed. When this error occurs, Splunk Cloud writes the failed info to an SQS DLQ in S3, which can be observed using:

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Successfully sent a SQS DLQ message to S3 with location"

Curious if anyone else out there is experiencing the same issue. SQSSmartbusInputWorker doesn't appear in any of the indexing documents, but it does appear to be very important to the ingest process.
Hey @padresman, I will try your example. Gotta be very careful that your expression fields match the capture group you use, as it will store it in attributes."capture group value" by default. Also, make sure to use Golang regex on regex101, though your regex appears to be fine. It's also wise to iterate and NOT remove the fields you make, so you can see what they look like when they arrive at Splunk. That can help make sure your value is what you think it is.
Quick update.  I changed the format block to use format_3:formatted_data instead of formatted_data.*.  The note looks a lot nicer, but it's still 500 items.