All Posts

No. If you upload a file via the "Add Data" screen, the events are indexed and immutable; there is no such thing as "updating" them. Also, why would you upload the same CSV multiple times? Why upload a CSV at all? In a normal production environment you typically monitor log files or ingest events in some other continuous way. Sometimes you upload log samples into dev/test environments, but that is a different case: there you usually don't mind the duplicates, or you simply delete and recreate the index if duplication is an issue for you.
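That said, if duplicate uploads already exist in the index, one hedged workaround at search time is to keep only the events from the most recent load, assuming every event from a single upload shares the latest _indextime. The index and source names below are placeholders, not taken from the question:

```
index=main source="mydata.csv"
| eventstats max(_indextime) as last_load
| where _indextime = last_load
```

This is only a sketch; if uploads can overlap in _indextime, you would need a more reliable marker of the load (for example, a batch identifier column in the CSV itself).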
I am uploading CSV-format data into Splunk. Every time I change the data or add any information, I upload the full CSV file into Splunk again, so now I have duplicate events in Splunk. Is it possible to show only the data from the last uploaded CSV file? Thanks
Hi All, We have various Splunk UFs running on Windows and Unix machines, with versions ranging from 6.5.0 to 8.2.7. We are planning to upgrade them all to the latest universal forwarder, targeting 9.x. Can someone help with the intermediate versions required to upgrade from 6.5.0 to 9.x?
This worked fine for me to get to seconds; then I just divided by 60 and by 60 again to get to hours, which is what I wanted to sum up.
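For reference, a hedged sketch of that conversion in SPL; the field name duration_sec is an assumption, not taken from the thread:

```
| eval duration_hours = duration_sec / 3600
| stats sum(duration_hours) as total_hours
```

Dividing by 3600 is equivalent to the two divisions by 60 described above.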
In my splunk query I apply dedup on "mail sub". as you can see unique but very similar subject remains in table which I want to further become joined or considered as 1 row.

I have a slightly different reading of the OP's intention based on these sentences. Do you mean you want to group by mail subjects' similarity, such as "account created for *"? If so, you must realize that "similar" is a highly subjective word. Unless you spell out precise criteria to determine similarity, you must look for a natural language processing tool rather than Splunk search.

Suppose my reading of your intention is correct, and "account created for" is one such criterion for "similarity"; your illustrated single-row output is still wrong. Do you mean something like

mail from | mail sub                      | mail to                       | count
ABC       | account created for *A, B, C* | abc@a.com bcd@a.com efg@a.com | 3

Not only that. You also mentioned dedup on mail sub alone. That is quite counterproductive to accurate counting, because you are asking for "count ... on the basis of partial match in unique subject and mail from combined." At the very minimum, you must dedup on mail from and mail sub; you SHOULD probably also add mail to to that list for the count to make sense. But I'll leave those decisions to you.

Now, to use "account created for *" as a partial match: there are many ways to do that. Here is one

| rex field="mail sub" "(?<similarity>account created for)\s+(?<dissimilarity>.+)"
| stats values(dissimilarity) as dissimilarity values("mail to") as "mail to" by "mail from" similarity
| eval similarity = similarity . " *" . mvjoin(dissimilarity, ", ") . "*"
| fields - dissimilarity

This will give you

mail from | similarity                    | mail to
ABC       | account created for *A, B, C* | abc@a.com bcd@a.com efg@a.com

Hope this helps.
Here is an emulation that you can play with and compare with real data

| makeresults format=csv data="mail from, mail sub, mail to
ABC, account created for A, abc@a.com
ABC, account created for B, bcd@a.com
ABC, account created for C, efg@a.com"
``` data emulation above ```
I have a query and I need to show the logs as shown in the below image.

Total Messages: index=app-logs "Request received from all applications" | stats count
Error count: sum of counts (App logs + Exception logs + Canceled logs + 401 mess logs)
App logs: index=app-logs "Application logs received"
Exception logs: index=app-logs "Exception logs received"
Canceled logs: index=app-logs "unpassed logs received"
401 mess logs: index=app-logs "401 error message"
Stand by count: subtract (url - cleared log)
url: index=app-logs "url info staged"
cleared log: index=app-logs "Filtered logs arranged"
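One hedged way to get all of these numbers from a single search is to count each message type with eval conditions and then derive the two computed fields. The index name and message strings are taken from the question above; the overall shape is only a sketch, not a tested query:

```
index=app-logs
| stats count(eval(searchmatch("Request received from all applications"))) as total_messages
        count(eval(searchmatch("Application logs received"))) as app_logs
        count(eval(searchmatch("Exception logs received"))) as exception_logs
        count(eval(searchmatch("unpassed logs received"))) as canceled_logs
        count(eval(searchmatch("401 error message"))) as mess_401
        count(eval(searchmatch("url info staged"))) as url_count
        count(eval(searchmatch("Filtered logs arranged"))) as cleared_count
| eval error_count = app_logs + exception_logs + canceled_logs + mess_401
| eval standby_count = url_count - cleared_count
```

Counting everything in one pass avoids running seven separate searches over the same index.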
The mock data is helpful. (Note the two entries have no difference except timestamp.) But always verbalize your thought process of how you derive changed/stopped from this data. Asking volunteers to reverse-engineer (aka read minds behind) complex code discourages people from offering help. Because there are always more wrong speculations than correct ones, mind reading is usually a waste of time.

If I must try, I see that you are trying to determine the "not running" state from isnull('CPU %') AND isnull('MEM %'). I do not think this is possible, because if a process is not running, the command will not appear in any event. Your verbal descriptions give me the vague sense that you don't really expect Windows to give you an explicit event about something not running. Instead, you are expecting to detect a period of "stoppage" between a previously running process and a later running process (with a different PID). Is this correct? In that case, using the latest function on everything will not achieve that.

Meanwhile, if all you want to see is whether a specific command (such as cybAgent.bin) is running in the latest period during which any Windows events are available, you CAN use other events as a reference point. But you will have to give up the filter COMMAND=*cybAgent* so other events can come through. For example, if you know a specific command (I call it a "heartbeat") that always runs, you can have a filter like COMMAND IN (*cybAgent*, <heartbeat>), then use the heartbeat events to infer a process's "not running" periods. Is this your use case? (Theoretically you can pour all process events through and use all of them as heartbeats. Is that viable?)

Alternatively, if you don't have/want heartbeat events, but you know for certain that process events always come in at predetermined time intervals (e.g., every minute), you can use the interval as a reference to infer whether a command is running. Is this the case?

In addition, you did not describe your desired output.
The sample code suggests that in addition to status (including an indication of stoppage), you also want some metric on CPU and memory. If your use case is the former, i.e., detecting stoppage by detecting PID changes, you will need a stats function to calculate that metric. Is that avg?
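A hedged sketch of the heartbeat idea in SPL; the index name, the one-minute span, and the <heartbeat> placeholder are all assumptions rather than details from the thread:

```
index=windows_process (COMMAND="*cybAgent*" OR COMMAND="<heartbeat>")
| timechart span=1m count(eval(like(COMMAND, "%cybAgent%"))) as agent_events count as any_events
| eval status = case(agent_events > 0, "running", any_events > 0, "not running", true(), "no data")
```

The point is that a time bucket containing heartbeat events but no cybAgent events is evidence of stoppage, whereas an empty bucket is merely missing data.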
That's excellent news. You can always use the "-v" verbose option of the curl command to see additional details about errors and related information, which can be invaluable for troubleshooting issues. As we discussed during the Webex call, the root cause of your API issues was the expired license. It's reassuring to hear that this matter has now been resolved.
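For reference, a hedged example of adding -v to a Splunk REST API call; the hostname and credentials are placeholders, and -k (skip certificate verification) is shown only for test environments:

```
curl -v -k -u admin:changeme https://splunk.example.com:8089/services/server/info
```

With -v, curl prints the TLS handshake, request headers, and response headers, which usually makes the cause of a 4xx error much easier to pin down.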
The issue was related to the license not being updated in the account. It is working now after refreshing the license and account details on the admin page.
I just updated the Splunk App for Lookup File Editing to the latest version and now I can no longer download lookup files via CLI. This had been working flawlessly in Splunk Cloud when I was running v3.6.0, but I just updated to 4.0.1 (v4.0.2 is not available in Cloud yet) and now I am getting 403 errors. Through testing, I verified the lookup endpoint is still valid and the lookup is shared at the global level, and I even changed the permissions of the account to sc_admin, but I am still experiencing the same issue. Has anyone else come across this and found a solution? I get the same error no matter which lookup file I attempt to download.

My test command:

python3 lut.py -app search -l geo_attr_countries.csv -app search
INFO:root:list of lookups to download: ['geo_attr_countries.csv']
ERROR:root:[failed] Error: Downloading file: 'geo_attr_countries.csv', status:403, reason:Forbidden, url:https://[REDACTED].splunkcloud.com:8089/services/data/lookup_edit/lookup_contents?lookup_type=csv&namespace=search&lookup_file=geo_attr_countries.csv

Python script from here
Look for the PowerShell logs containing "-LitigationHoldEnabled".
You can create a Custom Alert Action that is backed by your Python script: Using custom alert actions - Splunk Documentation. And here are the developer details on how you need to set things up: https://dev.splunk.com/enterprise/docs/devtools/customalertactions/
Where did you come across this feature?  As I recall, Splunk removed the ability to run a script as an alert action years ago.
Hello @Aman.Kulsange, I found this info... it may not completely apply to your specific situation, but I hope it provides some insight.

You will need to install ncurses-compat-libs:

sudo yum install -y ncurses-compat-libs

Or you can install libncurses6 and create symlinks:

sudo yum install libncurses6

For libncurses6 you need to create a symlink for libncurses5 pointing to libncurses6. Create the symlinks as shown in the link below:
https://docs.appdynamics.com/appd/onprem/latest/en/planning-your-deployment/physical-machine-controller-deployment-guide/prepare-the-controller-host/prepare-linux-for-the-controller

Please see below for the required libraries:
https://docs.appdynamics.com/appd/onprem/latest/en/enterprise-console/enterprise-console-requirements#id-.EnterpriseConsoleRequirementsv23.9-required-librariesRequiredLibraries
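A hedged sketch of the symlink step described above; the library directory (/usr/lib64) and exact .so version suffixes are assumptions that may differ on your distribution, so verify the installed filenames first:

```
sudo ln -s /usr/lib64/libncurses.so.6 /usr/lib64/libncurses.so.5
```

Running `ls /usr/lib64/libncurses*` beforehand will confirm the actual path and version to link against.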
Hello @Yann.Buccellato, Please check out these AppD Docs pages: https://docs.appdynamics.com/sap/en/monitoring-integration/sap-dashboards/sap-general-dashboards/idoc-monitoring https://docs.appdynamics.com/sap/en/monitoring-integration/set-up-monitoring-integration/monitoring-connector-mapping/mapping-between-kpis-and-metrics For the longer doc, it helps to search for "processing" so you find the section on processing time.
Hi @gtmj - Unfortunately it's still in development so not available quite yet. We appreciate your patience! 
Can you work with Support to get the older version? Also, what type of Splunk instance are you doing this on? Is it a UF, HF, Search Head, Indexer, etc.? I think that might help you approach this. Based on the docs, it sounds like losing some index configurations is part of the breaking changes. For example, if you are upgrading an Indexer that relies on the indexes.conf in the Windows app to define that index, you'll need to move those configurations into another indexes.conf within your deployment. A similar situation exists for the configurations included within authorize.conf in that older version. BUT, if this is just a UF, some of this might be a moot point, because UFs don't use indexes.conf configurations. You would probably have fewer concerns doing this on a UF (an edge agent) than on a Splunk instance that is part of the core infrastructure.
To determine if a given field value is in a lookup file, use the lookup command.

| eval email_domain = mvindex(split(TargetUserOrGroupName, "@"), 1)
| lookup free_email_domains.csv email_domain OUTPUT is_free_domain
``` If email_domain is not in the lookup file then is_free_domain will be null ```
| where isnotnull(is_free_domain)
Using just the where command to filter results removes only one Server1 event rather than all of them. Instead, you can use the eventstats command to associate the Deleted status with all events from the same server, then filter on that association.

| eventstats count(eval(Status="Deleted")) as is_deleted by Name
| where is_deleted > 0
| fields - is_deleted
As @richgalloway said, there are no "default" ports, just examples. You can choose whatever you want.