All Posts

Hello, I am running two separate queries to extract values.

First query:

index=abc status=error | stats count AS FailCount

Second query:

index=abc status=planning | stats count AS TotalPlanned

Both queries work well and give the expected results. When I combine them using a subsearch, I get an error:

index=abc status=error | stats count AS FailCount [ search index=abc status=planning | stats count AS TotalPlanned | table TotalPlanned ] | eval percentageFailed=(FailCount/TotalPlanned)*100

Error message:

Error in 'stats' command: The argument '(( TotalPlanned=761 ))' is invalid

Note: The count 761 is a valid count for TotalPlanned, so the subsearch did run.
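For context on why the parser complains: a subsearch is executed first and its results are substituted back into the outer search as a literal search expression. Placed right after `stats count AS FailCount`, that expression lands where `stats` expects arguments, which is exactly the text in the error. A minimal Python sketch of the substitution (the rendering details are my assumption, not documented Splunk internals):

```python
# Sketch: a subsearch is replaced by its results, rendered as a search
# expression like (( TotalPlanned=761 )), before the outer query is parsed.
# After `stats`, that text is not a valid stats argument, hence the error.
row = {"TotalPlanned": 761}              # what the subsearch returned
inner = " ".join(f"{k}={v}" for k, v in row.items())
rendered = f"(( {inner} ))"
outer = f"index=abc status=error | stats count AS FailCount {rendered} | ..."
print(rendered)  # (( TotalPlanned=761 )) -- exactly the text in the error
```

This is why totals from a second search are usually combined with something like appendcols after the first `stats`, rather than a bare subsearch placed directly after it.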
You are right, I had issues in my raw data. Thanks very much for the help!!
WINNER WINNER! Thank you very much!  
A user is receiving duplicated field names in Splunk results. For example, when I run a search I get an output of field1=Value1, but when the user runs the same search he gets field1="field1=value1". Does anyone know what I need to do to help the user get the same result as mine?
Are you still using a 6.5 forwarder in 2024? I suppose not. Try starting a new thread with a more detailed description of your problem.
Well, the stronger the assumptions you can make about the data, the easier it gets. If you have to care about exceptions, it gets complicated quickly.

Take a typical example of key="string value" or key=number_value. The latter form is obviously pretty easy to parse:

(?<key>\S+)=(?<value>\d+)

The former is way more complicated to do well. If you simply do

(?<key>\S+)="(?<value>.*)"

you'll capture way past the boundary of the k-v pair if you have many of them. If you go lazy,

(?<key>\S+)="(?<value>.*?)"

you'll stop matching in the middle if you have an escaped quote within your value. (The same happens if, instead of matching .* lazily, you match [^"]*, because of course you encounter the same quote.)

OK, so how about we match everything that is not a quote, or a quote preceded by a backslash:

(?<key>\S+)="(?<value>([^"]|\\")*)"

Nice idea, but what if your value ends with a backslash (expressed as a double backslash due to escaping)? It contains the \" sequence, so you'll go way past the end of the value. So maybe we should try a negative lookbehind, so that a backslash preceding a quote cannot itself be preceded by another backslash? But how about three, four, and so on backslashes followed by a quote? It gets messier and messier.

If you either know for sure that your data will never look like those border cases, or can consciously decide that you don't care about them and can live with them being extracted wrongly, it's way easier. In your case that would be, for example, a message containing

comment=This is actually an example of a key=value pair. Another_field=Another value

or

groceries=Let's buy bread, strawberries, butter and pork loin.

As you can see, without additional assumptions about the data, the extractions will be wrong.
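The failure modes above can be reproduced in any PCRE-style engine. A short Python demonstration (the sample line is made up; the third pattern is one common escape-aware variant, which behaves well here only because it assumes a consistent backslash-escaping convention):

```python
import re

# Hypothetical sample: a quoted value containing escaped quotes, then a second pair.
line = r'comment="She said \"hi\"" other="x"'

greedy = re.search(r'(\S+)="(.*)"', line)
lazy   = re.search(r'(\S+)="(.*?)"', line)
# Escape-aware: a value is a run of "not quote/backslash, or backslash + any char".
aware  = re.search(r'(\S+)="((?:[^"\\]|\\.)*)"', line)

print(greedy.group(2))  # She said \"hi\"" other="x  (ran past the pair boundary)
print(lazy.group(2))    # She said \                 (stopped at the escaped quote)
print(aware.group(2))   # She said \"hi\"            (right, given this convention)
```

As the post argues, the escape-aware pattern only looks robust because the assumption about how escaping works is doing all the work; data that breaks the convention breaks the extraction.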
Hi @Tiong.Koh

I executed the command in a CMD console on the server and the output was the attached file; the name didn't change after that. The command was the following:

java -Djava.library.path="E:\AppDynamics_DB" -Ddbagent.name="DBAgent01" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar E:\AppDynamics_DB\db-agent.jar

I don't know where the problem is or if I did something wrong.
I was able to get it to work with this final query:

index="jsm_issues" | dedup _time key | timechart count by fields.status.name | foreach * [eval <<FIELD>> = if('<<FIELD>>' > 0, '<<FIELD>>', null())] | filldown *

Thank you so much for your answer.
Hello, did you manage to resolve this? I am facing a similar problem with my intermediate forwarders crashing. I have enough memory (16 GB).
I have not seen a value with an equals sign. I am open to suggestions for a better delimiter. There are some values with spaces. Would this be a problem? I can certainly improve the structure. The comma-space-separated value is a multivalued field. There are two such fields, for which I would need to compute the highest-frequency value at a later stage. I can modify the multivalued field [list of data] to something like this: https://www.splunk.com/en_us/blog/tips-and-tricks/delimiter-base-kv-extraction-advanced.html?locale=en_us

FIELDS= "time", "client-ip", "cs-method", "sc-status"

@PickleRick Let me know if there are any changes to be made to log.message to make it "full-preg" so that Splunk can deliver.
I tried the exact same thing, just with my URL. It did not pass any tokens. Permissions have been double- and triple-checked. Is there a limit to how many tokens can be passed? I am trying to pass the current dashboard a user may be on so they can share the link with all the tokens (filters) applied.
If you are doing a clean installation and want to use the old node information, you must restore at least the splunk/etc directory. But then there could be conflicts with buckets etc. For that reason I prefer to remove, clean up and create a new node, which I then add to the cluster.

If you are using that in-place Centos2RHEL migration as @johnhuang suggested, and you have the possibility to do it offline, that is probably the safest option. If you cannot do it offline, then maybe you could try putting the CM in maintenance mode, updating/migrating one node, syncing the cluster, putting it in maintenance mode again and continuing with the next node, etc. This could work, but it's best if you can test it in a lab/test environment first. I don't take any responsibility for these instructions, as I haven't done this myself! It's hard to estimate how long it will take, as that depends on your hardware, disk speed, network, etc.
A lot of that query wasn't cleaned up from previous exploration queries, but thanks for the response. It looks like your suggestion is almost working for me, except that the statement errors on the columns that are multi-word:

| foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]
"...when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared..."

My mistake again. When using the same output name, the second lookup overrides the first. Use OUTPUTNEW in the second:

index=A sourcetype="Any"
| fields "IP address" Hostname OS
| dedup "IP address" Hostname OS
| eval Hostname = lower(Hostname)
| lookup inventory.csv Reporting_Host as Hostname output Reporting_Host as match
| lookup inventory.csv Reporting_Host as "IP address" OUTPUTNEW Reporting_Host as match
| eval match = if(isnull(match), "missing", "ok")
| table Hostname "IP address" OS match
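For intuition about why the same output name caused the first lookup's matches to vanish, here is a rough Python analogy (the function names are made up, not a Splunk API): OUTPUT always overwrites the destination field, while OUTPUTNEW only fills it when no value is present yet.

```python
# Rough analogy to the lookup command's output clauses (hypothetical helpers):
def lookup_output(event, field, value):
    event[field] = value                  # OUTPUT: overwrite unconditionally

def lookup_outputnew(event, field, value):
    if event.get(field) is None:          # OUTPUTNEW: keep an existing value
        event[field] = value

event = {"match": "hostname-hit"}         # set by the first (hostname) lookup
lookup_output(event, "match", None)       # second (IP) lookup found nothing...
print(event["match"])                     # None -- OUTPUT wiped the first match

event2 = {"match": "hostname-hit"}
lookup_outputnew(event2, "match", None)   # with OUTPUTNEW the first match survives
print(event2["match"])                    # hostname-hit
```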
Can you explain the physical significance of "last valid count"? If you fill those zero values with any given non-zero value, where do you set the boundary? The very first zero following a previous non-zero? Why is that a valid representation of your data?

If you mean to simply connect non-zero values with a line, just set those 0 to null:

index="jsm_issues" | dedup _time key | timechart count by fields.status.name | foreach * [eval <<FIELD>> = if(<<FIELD>> > 0, <<FIELD>>, null())]

(Two pointers: when using timechart, there is no need to sort _time. Also, I don't see the point of count(fields.status.name) when the groupby is the field itself.)

Then, in Visualization -> Format, set Null values to "connect".

Here is an emulation:

index=_internal sourcetype=splunkd thread_name=* earliest=-1h@h latest=-0h@h-30m | timechart count by thread_name | foreach * [eval <<FIELD>> = if(<<FIELD>> > 100, sqrt(<<FIELD>>), 0)] ``` the above emulates index="jsm_issues" | dedup _time key | timechart count by fields.status.name ```

[Screenshot: without setting 0 to null]
[Screenshot: 0 set to null, without connecting the dots]
[Screenshot: connecting the dots]
Hi @yuanliu

Thank you for your feedback! I will definitely look into the performance issue and plan for further improvements. Regarding the query, I tried it out, and here's how it's working:

Index A contains around 70k assets, which serves as our asset inventory. Some hosts in this index have multiple IP addresses assigned to them. Index B has just the hostname, but this can include a mix of IP addresses, FQDNs, and hostnames.

When I ran the query, the first lookup compared the Reporting_Host with the hostnames in Index A and determined whether there was a match. The second lookup compared the Reporting_Host against the IP addresses in Index A to check for matches. However, when we combined these lookups as shown in the query you shared, the results only reflected matches from the second lookup, meaning only the IP addresses were being compared.

Additionally, since a host in Index A has multiple IP addresses, the query gives a match for the IP address that corresponds, but shows the remaining IP addresses associated with that host as missing.
You can still use a token in that where clause. In fact, where in an inputlookup uses the same syntax as a search term, unlike the where command, which requires an eval expression.
I think my understanding fits your description. The idea behind my suggested search is:

1. Search between the two hours. Find all records that have pv_number. (You can restrict pv_number to a given value, but my search assumes that you want to group by pv_number, which is stated in the OP.)
2. Look for the earliest value of state, and the latest.
3. Compare the earliest value and the latest value. Only print those where the two are equal.

Have you tried my search? Also play with my emulation (which should run in any instance), and examine the output with and without that where filter. As my code indicates, I use thread_name to fake pv_number and date_minute to fake state. They may have different values from your real data, but the principle is the same.
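The steps above can be sketched outside SPL as well; here is a small Python illustration of the same logic (field names are from the thread, the sample events are made up):

```python
from collections import defaultdict

# Group events by pv_number, compare the earliest and latest 'state' in the
# window, and keep only the groups where the two are equal.
events = [
    {"_time": 1, "pv_number": "A", "state": "open"},
    {"_time": 2, "pv_number": "A", "state": "open"},
    {"_time": 1, "pv_number": "B", "state": "open"},
    {"_time": 3, "pv_number": "B", "state": "closed"},
]

groups = defaultdict(list)
for e in events:
    groups[e["pv_number"]].append(e)

unchanged = sorted(
    pv for pv, evs in groups.items()
    if min(evs, key=lambda e: e["_time"])["state"]
    == max(evs, key=lambda e: e["_time"])["state"]
)
print(unchanged)  # ['A'] -- B changed state between earliest and latest
```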
This worked for me! I'm kind of surprised how close my pseudo search was to the right answer! I did modify this a little to use `search` instead of `where` so that I could add a dashboard token to this query as well.
Issue after upgrading HF Splunk from 9.2.1 to 9.2.2. OS: Red Hat 8.10, latest kernel version. Tried changing permissions on the splunk folder. Tried setting SELinux to permissive mode.

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk start --accept-license --answer-yes
Error calling execve(): Permission denied
Error launching systemctl show command: Permission denied
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
Can't run "btool server list clustering --no-log": Permission denied

[afmpcc-prabdev@sgmtihfsv001 splunk]$ sudo -u splunk /mnt/splunk/splunk/bin/splunk btool server list clustering --no-log
execve: Permission denied while running command /mnt/splunk/splunk/bin/btool
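One possibility worth checking (an assumption on my part, not confirmed in the thread): execve "Permission denied" for binaries under /mnt, even with SELinux permissive and correct ownership, is a classic symptom of the filesystem being mounted with the noexec option. A small Python sketch that scans mount options for it (the sample /proc/mounts content is made up):

```python
# Hedged diagnostic sketch: list mount points whose options include 'noexec'.
def noexec_mounts(mounts_text):
    """Return mount points from /proc/mounts-style text that are noexec."""
    hits = []
    for line in mounts_text.splitlines():
        parts = line.split()
        # /proc/mounts fields: device, mountpoint, fstype, options, dump, pass
        if len(parts) >= 4 and "noexec" in parts[3].split(","):
            hits.append(parts[1])
    return hits

# Hypothetical /proc/mounts content:
sample = ("/dev/sdb1 /mnt ext4 rw,noexec,relatime 0 0\n"
          "/dev/sda1 / xfs rw,relatime 0 0")
print(noexec_mounts(sample))  # ['/mnt']
```

On the real host you would feed it the contents of /proc/mounts (or just run `mount | grep /mnt` and look for noexec); if /mnt is noexec, remounting with exec should let the splunk binaries run.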