All Posts


I'm working on a column chart visualization that shows income ranges: "$24,999 and under", "$25,000 - $99,999", "$100,000 and up". The problem is that when the column chart orders them, it puts "$100,000 and up" first instead of last. I've created an eval that assigns a sort_order value based on the field value and orders them correctly. However, I can't figure out how to get the column chart to sort according to that field. This is what I'm currently trying:

| eval sort_order=case(income=="$24,000 and under",1,income=="$25,000 - $39,999",2,income=="$40,000 - $79,999",3,income=="$80,000 - $119,999",4,income=="$120,000 - $199,999",5,income=="$200,000 or more",6)
| sort sort_order
| chart count by income

Here's the visualization: Is there some other way to accomplish this?
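One possible workaround for the sorting question above (a sketch, not from the original post): since the chart command drops sort_order, you can re-derive it from the income field after the chart, sort the rows, and then remove the helper field so it does not show up as a series. The case() branches below simply reuse the values already shown in the question.

| chart count by income
| eval sort_order=case(income=="$24,000 and under",1, income=="$25,000 - $39,999",2, income=="$40,000 - $79,999",3, income=="$80,000 - $119,999",4, income=="$120,000 - $199,999",5, income=="$200,000 or more",6)
| sort sort_order
| fields - sort_order

Column charts render categories in row order, so sorting the result rows this way should also control the category order on the x-axis.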
Hello All, I have a lookup file which stores a set of SPLs and periodically gets refreshed. How can I build a search query that iteratively executes each SPL from the lookup file? Any suggestions or ideas would be very helpful. Thank you, Taruchit
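One idea, offered only as a hedged sketch (the lookup name spl_queries.csv and the column name query are assumptions for illustration): the map command can run one subsearch per result row, substituting field values into its search string. Note that map has a maxsearches limit, and queries containing quotes or pipes may need extra escaping.

| inputlookup spl_queries.csv
| map search="$query$" maxsearches=50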
Ok. If you want to find the moment at which the PID changed, you have to carry it over to the next event (otherwise Splunk doesn't have any notion of a relationship between separate events) using the "autoregress" command or, in a more universal manner, using streamstats:

| streamstats current=f last(PID) as lastPID by COMMAND

This way you can see when lastPID for a given command is different from PID (mind you, Splunk by default sorts in reverse chronological order, so this way you'll find the latest event before the restart; you can tweak this solution with sorting to find the first one after the restart). As a side note, don't use wildcards at the beginning of the search term unless you absolutely must.
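A minimal end-to-end sketch of that idea (the index, sourcetype, and command filter below are placeholders, not from the thread):

index=your_index sourcetype=your_sourcetype COMMAND="cybAgent*"
| streamstats current=f last(PID) as lastPID by COMMAND
| where isnotnull(lastPID) AND PID!=lastPID
| table _time COMMAND lastPID PID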
And what have you tried so far and what is the problem with your result? To make things clear - in Splunk there is no "merging" of cells. Maybe there is a visualization which silently renders a table this way but I know of no such thing. Generally, a table has a "full grid" of results. Do you have problems with combining your searches into a single one or do you have the search but can't visualize it?
Forwarders are not as sensitive to a particular upgrade path as "full" Splunk Enterprise instances are. I remember upgrading all the way straight from 7.2 to 9.0. I'm not sure about going as far back as 6.5, but with a reasonable backup of the configuration I wouldn't expect many problems.
No. If you upload a file via the "Add Data" screen, the events get indexed and are immutable. There is no such thing as "updating" the events. Also, why would you upload the same CSV multiple times? Why would you upload a CSV at all? In a normal production environment you typically monitor log files or get events ingested in some other continuous way. Sometimes you upload samples of logs into dev/testing environments, but that's a different case, and there you usually don't mind the duplicates and/or you'd simply delete and recreate the index if duplication was an issue for you.
I am uploading CSV-format data into Splunk. Every time I make a change to the data or add any info, I upload the full CSV file into Splunk again. Now I have duplicate events in Splunk. Is it possible to show only the data from the last uploaded CSV file? Thanks
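As a hedged sketch only (assuming the CSV has a unique key column, called id here purely for illustration), one way to keep just the most recently indexed copy of each row is to dedup on that key by index time:

index=your_index sourcetype=csv
| eval indexed_at=_indextime
| dedup id sortby -indexed_at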
Hi All, we have various Splunk UFs running on Windows and Unix machines. We are planning to upgrade them all to the latest universal forwarder. We currently have versions ranging from 6.5.0 to 8.2.7, and our plan is to upgrade to 9.x. Can someone help with the intermediate versions needed to upgrade from 6.5.0 to 9.x?
This worked fine for me to get to seconds; then I just did /60/60 to get to hours, which is what I wanted to sum up.
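For reference, the conversion described above might look like this in SPL (the field names duration_seconds and duration_hours are placeholders, not from the original answer):

| eval duration_hours = duration_seconds / 60 / 60
| stats sum(duration_hours) as total_hours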
In my splunk query I apply dedup on "mail sub".  as you can see unique but very similar subject remains in table which I want to further become joined or considered as 1 row.

I have a slightly different reading of the OP's intention based on these sentences. Do you mean you want to group by the mail subjects' similarity, such as "account created for *"? If so, you must realize that "similar" is a highly subjective word. Unless you spell out precise criteria to determine similarity, you must look for a natural language processing tool rather than Splunk search.

Suppose my reading of your intention is correct, and that "account created for" is one such criterion for "similarity"; your illustrated single-row output is still wrong. Do you mean something like

mail from | mail sub | mail to | count
ABC | account created for *A, B, C* | abc@a.com bcd@a.com efg@a.com | 3

Not only that. You also mentioned dedup on mail sub alone. That is quite counterproductive to accurate counting because you are asking for "count ... on the basis of partial match in unique subject and mail from combined." At the very minimum, you must dedup on mail from and mail sub; you SHOULD probably also add mail to to that list for the count to make sense. But I'll leave those decisions to you.

Now, to use "account created for *" as a partial match. There are many ways to do that. Here is one:

| rex field="mail sub" "(?<similarity>account created for)\s+(?<disimilarity>.+)"
| stats values(disimilarity) as disimilarity values("mail to") as "mail to" by "mail from" similarity
| eval similarity = similarity . " *" . mvjoin(disimilarity, ", ") . "*"
| fields - disimilarity

This will give you

mail from | similarity | mail to
ABC | account created for *A, B, C* | abc@a.com bcd@a.com efg@a.com

Hope this helps. Here is an emulation that you can play with and compare with real data:

| makeresults format=csv data="mail from, mail sub, mail to
ABC, account created for A, abc@a.com
ABC, account created for B, bcd@a.com
ABC, account created for C, efg@a.com"
``` data emulation above ```
I have a query and I need to show the logs as shown in the image below.

Total Messages: index=app-logs "Request received from all applications" | stats count
Error count: sum of counts (App logs + Exception logs + Canceled logs + 401 mess logs)
App logs: index=app-logs "Application logs received"
Exception logs: index=app-logs "Exception logs received"
Canceled logs: index=app-logs "unpassed logs received"
401 mess logs: index=app-logs "401 error message"
Stand by count: subtract (url - cleared log)
url: index=app-logs "url info staged"
cleared log: index=app-logs "Filtered logs arranged"
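Not from the original post, but one possible way to compute all of these counts in a single search (assuming the quoted strings match the raw events exactly) is conditional counting with searchmatch():

index=app-logs
| stats count(eval(searchmatch("Request received from all applications"))) as total_messages
        count(eval(searchmatch("Application logs received"))) as app_logs
        count(eval(searchmatch("Exception logs received"))) as exception_logs
        count(eval(searchmatch("unpassed logs received"))) as canceled_logs
        count(eval(searchmatch("401 error message"))) as mess_401
        count(eval(searchmatch("url info staged"))) as url_count
        count(eval(searchmatch("Filtered logs arranged"))) as cleared_logs
| eval error_count = app_logs + exception_logs + canceled_logs + mess_401
| eval standby_count = url_count - cleared_logs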
The mock data is helpful. (Note the two entries have no difference except timestamp.) But always verbalize your thought process of how you derive changed/stopped from this data. Asking volunteers to reverse engineer (aka mind-read) complex code discourages people from offering help. Because there are always more wrong speculations than correct ones, mind reading is usually a waste of time.

If I must try, I see that you are trying to determine the "not running" state from isnull('CPU %') AND isnull('MEM %'). I do not think this is possible, because if a process is not running, the command will not be in any event. Your verbal descriptions give me the vague sense that you don't really expect Windows to give you an explicit event about something not running. Instead, you are expecting to detect a period of "stoppage" between a previously running process and a later running process (with a different PID). Is this correct? In that case, using the latest function on everything will not achieve that.

Meanwhile, if all you want to see is whether a specific command (such as cybAgent.bin) is running in the latest period during which any Windows events are available, you CAN use other events as a reference point. But you will have to give up the filter COMMAND=*cybAgent* so other events can come through. For example, if you know a specific command (I call it a "heartbeat") that always runs, you can use a filter like COMMAND IN (*cybAgent*, <heartbeat>), then use the heartbeat events to infer a process' "not running." Is this your use case? (Theoretically you can pour all process events through and use all of them as heartbeat. Is that viable?)

Alternatively, if you don't have/want heartbeat events, but you know for certain that process events always come in at predetermined time intervals (e.g., every minute), you can use the interval as a reference to infer whether a command is running. Is this the case?

In addition, you did not describe your desired output. The sample code suggests that in addition to status (including an indication of stoppage), you also want some metric on CPU and memory. If your use case is the former, i.e., detect stoppage by detecting PID changes, you will need a stats function to calculate that metric. Is that avg?
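To make the heartbeat idea above concrete, here is a rough sketch (every name in it, including the index and the heartbeat command, is a placeholder assumption, not something from this thread): bucket the events by time and flag buckets in which heartbeat events arrived but no cybAgent events did.

index=windows_process_index (COMMAND="cybAgent*" OR COMMAND="your_heartbeat_command")
| bin _time span=5m
| stats count(eval(like(COMMAND, "cybAgent%"))) as agent_events count as all_events by _time
| where agent_events=0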
That's excellent news. You can always use the "-v" verbose option with the curl command to get additional details about errors and other related information, which can be invaluable for troubleshooting. As we discussed during the Webex call, the root cause of your API issues was the expired license. It's reassuring to hear that this matter has now been resolved.
The issue was related to the license not being updated in the account. It is working now after refreshing the license and account details on the admin page.
I just updated the Splunk App for Lookup File Editing to the latest version and now I can no longer download lookup files via the CLI. This was working flawlessly in Splunk Cloud when I was running v3.6.0, but I just updated to 4.0.1 (v4.0.2 is not available in Cloud yet) and now I am getting 403 errors. Through testing, I verified the lookup endpoint is still valid and the lookup is shared at the global level, and I even changed the permissions of the account to sc_admin, but I am still experiencing the same issue. Has anyone else come across this and found a solution? I get the same error no matter which lookup file I attempt to download.

My test command:

python3 lut.py -app search -l geo_attr_countries.csv -app search
INFO:root:list of lookups to download: ['geo_attr_countries.csv']
ERROR:root:[failed] Error: Downloading file: 'geo_attr_countries.csv', status:403, reason:Forbidden, url:https://[REDACTED].splunkcloud.com:8089/services/data/lookup_edit/lookup_contents?lookup_type=csv&namespace=search&lookup_file=geo_attr_countries.csv

Python script from here
Look for the PowerShell logs containing "-LitigationHoldEnabled".
You can create a Custom Alert Action that is backed by your Python script: Using custom alert actions - Splunk Documentation. And here are the developer details on how you need to set things up: https://dev.splunk.com/enterprise/docs/devtools/customalertactions/
Where did you come across this feature?  As I recall, Splunk removed the ability to run a script as an alert action years ago.
Hello @Aman.Kulsange, I found this info... it may not completely apply to your specific situation, but I hope it provides some insight.

You will need to install ncurses-compat-libs:

sudo yum install -y ncurses-compat-libs

Or you can install libncurses6 and create symlinks:

sudo yum install libncurses6

For libncurses6 you need to create a symlink for libncurses5 pointing to libncurses6. Create the symlinks as shown in the link below:

https://docs.appdynamics.com/appd/onprem/latest/en/planning-your-deployment/physical-machine-controller-deployment-guide/prepare-the-controller-host/prepare-linux-for-the-controller

Please see below for the required libraries:

https://docs.appdynamics.com/appd/onprem/latest/en/enterprise-console/enterprise-console-requirements#id-.EnterpriseConsoleRequirementsv23.9-required-librariesRequiredLibraries
Hello @Yann.Buccellato, please check out these AppD Docs pages:

https://docs.appdynamics.com/sap/en/monitoring-integration/sap-dashboards/sap-general-dashboards/idoc-monitoring
https://docs.appdynamics.com/sap/en/monitoring-integration/set-up-monitoring-integration/monitoring-connector-mapping/mapping-between-kpis-and-metrics

For the longer doc, it helps to search for "processing" to find the section on processing time.