All Posts

1. I don't understand how removing unused apps would save you money (apart from premium apps like ES or ITSI, but there you should already know whether your company is using them). There is a small number of paid apps for Splunk, but they are - if I remember correctly - typically associated with particular input types, so you need them if you ingest data of a given kind.

2. An app upgrade typically (if the app is properly written) just boils down to deploying a new version of the app. The new version should contain new default files but should leave your local settings untouched. But.

3. There might be some apps which introduce incompatibilities across versions and need additional steps after the upgrade. To find out what those are, you need to check the app's docs. There is no other way.

4. The method of deploying new versions of your apps will greatly depend on your environment architecture - whether you use clustering on your search heads and indexers, whether you push apps from a deployment server, and which push mode you use on your SHC deployer (if applicable). This might be as easy as going to the apps menu and clicking "upgrade" a few times, but it can be much more complicated.

EDIT: Sorry, I didn't notice you posted this in the Cloud section. In that case the "main" part should be easier (although if there are apps that were added manually - not from Splunkbase - you'd need to go through vetting the new versions again). You'll still need to upgrade apps on your DS/HFs/UFs as well, which _probably_ (unless you manage everything by hand) means uploading the new versions to the DS and letting the clients download and deploy them.
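As for finding which apps actually get used, one rough starting point is to look at UI activity in the _internal index. This is only a sketch - it assumes your _internal retention covers a representative window, that the field names match your version, and it only captures web UI hits, not scheduled searches or inputs an app may ship:

index=_internal sourcetype=splunk_web_access OR sourcetype=splunkd_ui_access
``` pull the app name out of the request URI ```
| rex field=uri "/app/(?<app>[^/]+)/"
| where isnotnull(app)
| stats dc(user) as users count as requests latest(_time) as last_used by app
| eval last_used=strftime(last_used, "%F %T")
| sort requests

Apps that never appear here are candidates to investigate further, not proof that they are unused.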
Thanks for the reply, but I want the total count when the timeval is latest (in this case 2023), so according to my lookup the result should be 2: the BIE count is 0 and the RAD count is 2, so 0+2=2. Hope this helps in understanding.
If you want to just get some statistical report on data read from your lookup, use the inputlookup command. For example, | inputlookup mylookup | stats count will give you the number of rows in your lookup. You can do any operation on fields read from the lookup that you would normally do in a "normal" event search.
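For the specific case in this thread (counting only the rows with the latest timeval), a sketch along these lines might work - the lookup name mylookup is a placeholder and the field name timeval is taken from the question:

| inputlookup mylookup
``` keep only the rows whose timeval equals the most recent timeval in the lookup ```
| eventstats max(timeval) as latest_timeval
| where timeval=latest_timeval
| stats count

If each row already carries its own count field, replace the last line with | stats sum(count) as count.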
I have this lookup. I want the total count when the timeval is latest (in this case 2023). Any solution?
I am tasked with doing the application upgrades on Splunk and also with finding out which applications are not being used much, so we can uninstall them and save some cost. Can someone help me with the steps to upgrade the applications in Splunk across regions, and also with how I can list the apps which are not being used?
I have a use case where I want to set up Splunk Alerts for certain Exception events. I have already defined standard Error messages for these individual Exceptions. Below is a sample use case:

Exception Event 1: Standard Error Message 1, Common Message
Exception Event 2: Common Message

In the above use case, when Exception Event 1 happens, it outputs 2 messages to the log (Standard Error Message 1 and Common Message). When Exception Event 2 happens, it only outputs the Common Message to the log. For the Splunk Alert for Event 1, I want to check the counts of search results matching both Message 1 and the Common Message, to ensure that both searches return the same result count for a given time period. Is it possible to achieve this type of Splunk query using eval and an if statement? My objective is to accurately identify the scenario where Exception Event 1 occurs and both messages are output to the logs in the same count.
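A sketch of the kind of eval/if comparison described above - the index name and message strings are placeholders and would need to match the real log text:

index=app_logs ("Standard Error Message 1" OR "Common Message")
``` count each message independently over the same time period ```
| stats sum(eval(if(match(_raw, "Standard Error Message 1"), 1, 0))) as msg1_count sum(eval(if(match(_raw, "Common Message"), 1, 0))) as common_count
``` flag whether the two counts agree ```
| eval counts_match=if(msg1_count==common_count, "yes", "no")

The alert condition can then be built on counts_match (or on msg1_count alone), depending on whether you want to fire when the counts agree or when they diverge.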
Hi @Jugabanhi, you have to assign the Time Ranges to the role of these users in [Settings > User Interfaces > Time Ranges]. Ciao. Giuseppe
Actually, SplunkWeb crashes in the "preview" stage of creating a new input, so I can't even create the input that way. That is why I'm using the Python SDK (which is basically the REST API), and I clearly see that error message in the debug log, so it's not a SplunkWeb issue at all.
I want to hide time picker options like real-time and presets for some specific roles, while admin should still see all of them. I am able to hide them for all users with CSS, but I need to hide them only for specific user roles. Thanks in advance.
Hello All, I have created a scheduled Alert which is set to run once every day, and the alert has a Splunk query with the sendemail command. I set the alert to send a link to view the results and the alert details, but when the alert is triggered I receive an email with only the results returned from the search; I don't see the link to the results, even though I configured it while setting up the alert. Can someone assist me on this?
Hi @bowesmana, the lookup runs correctly outside the Data Model; that's why I opened the question in the Community, because it seems there's an issue with the lookup usage in the Data Model. Ciao. Giuseppe
Hello Team, I have a few queries regarding Logs Monitoring in AppDynamics.
1. Where are logs stored in the AppDynamics SaaS controller when enabled through Log Analytics?
2. How is the storage management done for logs?
3. What is the retention period for the logs, and can it be modified?
Thanks
Thank you very much @dtburrows3!! I can see the results for each application, but it looks like the map command does not work for me. I also tried using just the sendemail command, but it does not work either. When I give the email id manually I can see an email getting triggered, but not when I use the field name which holds the email ids. Can you provide a suggestion on this? Thanks in advance!
Indeed, as it is in @bitnapper's original question. nullif, match, etc. could be added for input validation.
spath works nicely, but the one-liner only works if ALL of the data is made up of code points.
Hi @inventsekar, here is my SPL for the Missile Map:

| tstats `security_content_summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where (NOT IDS_Attacks.src IN(192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8, 8.8.8.8, 0.0.0.0, 1.1.1.1, 0:0:0:0:0:0:0:1, "unknown", 34.87.171.169)) NOT IDS_Attacks.severity IN (low, informational) by IDS_Attacks.src
| rename IDS_Attacks.* as *
| eval animate="true"
| iplocation dest prefix=end_
| iplocation src prefix=start_
| eval end_lat=if(isnull(end_lat), 21.007647, end_lat)
| eval end_lon=if(isnull(end_lon), 105.807235, end_lon)
| eval color = case(count <= 100, "#8fce00", count > 100 AND count <= 300, "#ed8821", 1=1, "#f44336")
| eval end_City="Hanoi", end_Country="Vietnam", end_Region="Hanoi"
| sort -count
| dedup start_Country
| table animate color count dest end_City end_Country end_Region end_lat end_lon src start_City start_Country start_Region start_lat start_lon
Here's a one-liner that handles both decimal and hexadecimal code points:

| eval name=mvjoin(mvmap(split(name, ";"), printf("%c", if(match(name, "^&#x"), tonumber(replace(name, "&#x", ""), 16), tonumber(replace(name, "&#", ""), 10)))), "")

You can also pad the value with XML tags and use the spath command:

| eval name="<name>".name."</name>"
| spath input=name path=name

or the xpath command:

| eval name="<name>".name."</name>"
| xpath outfield=name "/name" field=name

However, avoid the xpath command in this case. It's an external search command and requires creating a separate Python process to invoke $SPLUNK_HOME/etc/apps/search/bin/xpath.py.
Hi @tmaoz, The errors are reported when onboarding data through Splunkweb; however, if you're using INDEXED_EXTRACTIONS = csv, for example, the fields should be present in the index itself. You can verify this with the walklex command after indexing your CSV file:

| walklex index=xxx type=field
| table field

You may need to increase the [kv] indexed_kv_limit setting in limits.conf or set it to 0 to disable the limit.
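For reference, the stanza mentioned above would look roughly like this in limits.conf (0 disables the limit, as noted):

[kv]
indexed_kv_limit = 0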
What if you do the manual lookup with the lookup definition, not the raw CSV - as that's what the DM is doing.

| lookup LOOKUP_DEFINITION sourcetype OUTPUT Application
Here's an example that extracts all the &#nnnn; sequences into a multivalue char field, which is then converted to the chars MV. It seems like mvdedup can be used to remove duplicates in each case, and the ordering appears to be preserved between the two MVs, so the final foreach can then replace each sequence inside name.

| makeresults format=csv data="id,name
1,&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"
``` Extract all sequences ```
| rex field=name max_match=0 "\&#(?<char>\d{4});"
``` Create the char array ```
| eval chars=mvmap(char, printf("%c", char))
``` Remove duplicates from each MV - assumes ordering is preserved ```
| eval char=mvdedup(char), chars=mvdedup(chars)
``` Now replace each item ```
| eval c=0
| foreach chars mode=multivalue
    [ eval name=replace(name, "\&#".mvindex(char, c).";", <<ITEM>>), c=c+1 ]

You could make this a macro and pass a string to it, and the macro could do the conversion. Note that fixing the ingest is always the best option, but this can deal with any existing data. Assumes you're running Splunk 9.