All Posts



Hi @Jugabanhi, you have to assign the Time Ranges to the role of these users in [Settings > User Interfaces > Time Ranges]. Ciao. Giuseppe
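For reference, time ranges created through that Settings page end up as stanzas in times.conf; here is a minimal illustrative sketch (the stanza name and values below are placeholders, not from the original post):

```
# times.conf (e.g. in an app's local/ directory)
[last_7_days]
label = Last 7 days
earliest_time = -7d@d
latest_time = now
order = 10
```

The UI path above is where these defined ranges are then made available to roles.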
Actually, SplunkWeb crashes in the "preview" stage of creating a new input, so I can't even create the input that way. That is why I'm using the Python SDK (which is basically a wrapper around the REST API), and I very much see that error message in the debug log, so it's not a SplunkWeb issue at all.
I want to hide time picker options like real-time and presets for some specific roles, while admins should see all of them. I am able to hide them for all users with CSS, but I need to hide them only for specific user roles. Thanks in advance.
Hello All, I have created a scheduled alert that is set to run once every day; the alert has a Splunk query with the sendemail command. I configured the alert to send a link to view the results and the alert details, but when the alert is triggered I receive an email containing only the results returned from the search; I don't see the link to the results, even though I configured it while setting up the alert. Can someone assist me with this?
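For context on where that link option lives: the alert's email settings are stored in savedsearches.conf. A hedged, illustrative sketch (the stanza name and recipient are placeholders, not from the post):

```
# savedsearches.conf (illustrative stanza)
[My Daily Alert]
action.email = 1
action.email.to = someone@example.com
# include a link to the search results and to the alert/view
action.email.include.results_link = 1
action.email.include.view_link = 1
# inline = 1 embeds the result table in the message body
action.email.inline = 1
```

Note that an email sent by the sendemail command inside the search is separate from the alert action's email; the results-link options above apply only to the latter.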
Hi @bowesmana, the lookup (outside the Data Model) runs correctly; for this reason I opened the question in the Community, because it seems there's an issue with how the lookup is used in the Data Model. Ciao. Giuseppe
Hello Team, I have a few queries regarding Logs Monitoring in AppDynamics.
1. Where are logs stored in the AppDynamics SaaS controller when enabled through Log Analytics?
2. How is storage management done for logs?
3. What is the retention period for the logs, and can it be modified?
Thanks
Thank you very much @dtburrows3! I can see the results for each application, but it looks like the map command does not work for me. I also tried using just the sendemail command, but it does not work either. When I specify the email address manually, I can see an email being triggered, but not when I use the field name that contains the email addresses. Can you provide a suggestion on this? Thanks in advance!
Indeed, as it is in @bitnapper's original question. nullif, match, etc. could be added for input validation.
spath works nicely, but the one-liner only works if ALL the data is made up of code points.
Hi @inventsekar, here is my SPL for the Missile Map:

| tstats `security_content_summariesonly` count from datamodel=Intrusion_Detection.IDS_Attacks where (NOT IDS_Attacks.src IN(192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8, 8.8.8.8, 0.0.0.0, 1.1.1.1, 0:0:0:0:0:0:0:1, "unknown", 34.87.171.169)) NOT IDS_Attacks.severity IN (low, informational) by IDS_Attacks.src
| rename IDS_Attacks.* as *
| eval animate="true"
| iplocation dest prefix=end_
| iplocation src prefix=start_
| eval end_lat=if(isnull(end_lat), 21.007647, end_lat)
| eval end_lon=if(isnull(end_lon), 105.807235, end_lon)
| eval color=case(count <= 100, "#8fce00", count > 100 AND count <= 300, "#ed8821", 1=1, "#f44336")
| eval end_City="Hanoi", end_Country="Vietnam", end_Region="Hanoi"
| sort -count
| dedup start_Country
| table animate color count dest end_City end_Country end_Region end_lat end_lon src start_City start_Country start_Region start_lat start_lon
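For readers less familiar with SPL's case() function, the color bucketing in the eval above can be sketched in Python (a hypothetical helper for illustration, not part of the dashboard):

```python
# Hypothetical Python mirror of the SPL:
# case(count <= 100, "#8fce00", count > 100 AND count <= 300, "#ed8821", 1=1, "#f44336")
def attack_color(count: int) -> str:
    if count <= 100:
        return "#8fce00"   # green: low attack volume
    if count <= 300:
        return "#ed8821"   # orange: medium attack volume
    return "#f44336"       # red: high attack volume

print(attack_color(150))  # → #ed8821
```

Like SPL's case(), the branches are evaluated in order, so the final return plays the role of the 1=1 catch-all.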
Here's a one-liner that handles both decimal and hexadecimal code points:

| eval name=mvjoin(mvmap(split(name, ";"), printf("%c", if(match(name, "^&#x"), tonumber(replace(name, "&#x", ""), 16), tonumber(replace(name, "&#", ""), 10)))), "")

You can also pad the value with XML tags and use the spath command:

| eval name="<name>".name."</name>"
| spath input=name path=name

or the xpath command:

| eval name="<name>".name."</name>"
| xpath outfield=name "/name" field=name

However, avoid the xpath command in this case. It's an external search command and requires creating a separate Python process to invoke $SPLUNK_HOME/etc/apps/search/bin/xpath.py.
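The same decoding logic can be sketched outside Splunk in Python, which may help when validating data before ingest (a hypothetical helper, mirroring the one-liner's hex/decimal branch):

```python
import re

# Decode numeric character references, both hexadecimal (&#x41F;)
# and decimal (&#1040;), into their Unicode characters.
def decode_charrefs(s: str) -> str:
    # Handle the hex form first so "&#x..." isn't misread as decimal
    s = re.sub(r"&#x([0-9A-Fa-f]+);", lambda m: chr(int(m.group(1), 16)), s)
    s = re.sub(r"&#(\d+);", lambda m: chr(int(m.group(1))), s)
    return s

print(decode_charrefs("&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"))  # → Алексей
```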
Hi @tmaoz, the errors are reported when onboarding data through SplunkWeb; however, if you're using INDEXED_EXTRACTIONS = csv, for example, the fields should be present in the index itself. You can verify this with the walklex command after indexing your CSV file:

| walklex index=xxx type=field
| table field

You may need to increase the indexed_kv_limit setting in the [kv] stanza of limits.conf, or set it to 0 to disable the limit.
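As a minimal sketch, the limits.conf change mentioned above would look like this (the file location is the usual one, but adjust for your deployment):

```
# limits.conf (e.g. $SPLUNK_HOME/etc/system/local/limits.conf)
[kv]
# limit on the number of keys extracted per event; 0 disables the limit
indexed_kv_limit = 0
```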
What if you do the manual lookup with the lookup definition, not the raw CSV, as that's what the DM is doing?

| lookup LOOKUP_DEFINITION sourcetype OUTPUT Application
Here's an example that extracts all the &#nnnn; sequences to a multivalue char field, which is then converted to the chars MV. mvdedup can be used to remove duplicates in each case, and the ordering appears to be preserved between the two MVs, so the final foreach will replace each sequence inside name.

| makeresults format=csv data="id,name
1,&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"
``` Extract all sequences ```
| rex field=name max_match=0 "\&#(?<char>\d{4});"
``` Create the char array ```
| eval chars=mvmap(char, printf("%c", char))
``` Remove duplicates from each MV - assumes ordering is preserved ```
| eval char=mvdedup(char), chars=mvdedup(chars)
``` Now replace each item ```
| eval c=0
| foreach chars mode=multivalue
    [ eval name=replace(name, "\&#".mvindex(char, c).";", <<ITEM>>), c=c+1 ]

You could make this a macro, pass the string to it, and have the macro do the conversion. Note that fixing the ingest is always the best option, but this can deal with any existing data. This assumes you're running Splunk 9.
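The extract-dedup-replace approach can be sketched in Python as well, which may be useful for cleaning the data at the source (a hypothetical helper; the 4-digit pattern matches the rex above):

```python
import re

# Find each distinct &#nnnn; sequence (4 decimal digits) and replace it
# with its character, preserving first-seen order like mvdedup does.
def replace_charrefs(name: str) -> str:
    codes = re.findall(r"&#(\d{4});", name)
    for code in dict.fromkeys(codes):  # dedup while keeping order
        name = name.replace(f"&#{code};", chr(int(code)))
    return name

print(replace_charrefs("1,&#1040;&#1083;&#1077;&#1082;&#1089;&#1077;&#1081;"))  # → 1,Алексей
```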
Hi @GIA, please check this search query (basically I have edited four places, removing the "|"); note the by clause belongs in the tstats command itself, before the filtering search:

| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic by _time All_Traffic.src_ip All_Traffic.dest_ip
| search NOT (All_Traffic.src_ip [| inputlookup internal_ranges.csv ]) AND (All_Traffic.dest_ip [| inputlookup internal_ranges.csv ]) AND (All_Traffic.action="allow*")
| `drop_dm_object_name(All_Traffic)`
| lookup ip_iocs.csv ioc as src_ip OUTPUTNEW last_seen
| append
    [| tstats summariesonly=true allow_old_summaries=true values(All_Traffic.dest_port) as dest_port values(All_Traffic.protocol) as protocol values(All_Traffic.action) as action values(sourcetype) as sourcetype from datamodel=Network_Traffic.All_Traffic where (All_Traffic.src_ip IN [| inputlookup internal_ranges.csv ]) AND NOT (All_Traffic.dest_ip IN [| inputlookup internal_ranges.csv ]) AND NOT (All_Traffic.protocol=icmp) by _time All_Traffic.src_ip All_Traffic.dest_ip
    | `drop_dm_object_name(All_Traffic)`
    | lookup ip_iocs.csv ioc as dest_ip OUTPUTNEW last_seen]
| where isnotnull(last_seen)
| head 51

To learn the lookup command, please check https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/lookup. Lookups, subsearches, append, tstats, and data models are somewhat complex topics, and it may take a while to understand them. Please don't lose hope; keep learning, daily, bit by bit. Hope you got it, thanks.
Hi @indeed_2000, yes, parallelizing data across more than one index is probably a good idea for increasing search speed, but having one index per day is a bit excessive and very difficult to manage! If needed, divide your data into two or three indexes, but not more! Ciao. Giuseppe
I see that Splunk 9.0.4 might still be compatible with Linux kernel 2.6 ("linux-2.6"). Despite Splunk's documentation indicating the deprecation of kernel 2.6 since Splunk Enterprise 8.2.9, the RPM package name ("splunk-9.0.4-de405f4a7979-linux-2.6-x86_64.rpm") for Splunk 9.0.4 suggests it is still built on and for this kernel version. Additionally, RHEL 6, which uses kernel 2.6, remains in extended support until June 30, 2024. This means Splunk 9.0.4 "can run" on kernel 2.6 under RHEL 6, which is still under extended support. (Wow, 18 years of support for the 2.6 kernel from RHEL!) "https://download.splunk.com/products/splunk/releases/9.0.4/linux/splunk-9.0.4-de405f4a7979-linux-2.6-x86_64.rpm" The Splunk support page might not reflect this compatibility, so even though the Splunk 9.0.4 package name might indicate that it's built on Linux kernel 2.6 and RHEL 6 is still under extended support, you might have a hard time making your case to your boss that this is a good idea, let alone to Splunk Support on this configuration. Cheers, Eddie
Also, if timestampOfReception is the main timestamp of the event, it should be properly parsed into the _time field of the event. That makes searching the events much, much quicker.
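As a hedged sketch, parsing timestampOfReception into _time is typically configured in props.conf on the parsing tier. The sourcetype name and time format below are assumptions for illustration, since the actual event layout wasn't shown:

```
# props.conf (illustrative; adjust to your real sourcetype and format)
[your:sourcetype]
TIME_PREFIX = "timestampOfReception"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```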
While technically one could think of a solution writing dynamically to a different index each day (you'd still need to have pre-created the indexes, though), Splunk is not Elasticsearch, so don't bring habits from there over to Splunk. Splunk works differently and has different methods from Elasticsearch for storing, indexing, and searching data. And the summary indexing thing: well, if you want to summarize something, you must have something to summarize, so you're always summarizing over some period of time. That conflicts with summarizing in "real time". Anyway, even if you had the ability to create a summary for, let's say, a sliding window of 30 minutes "backwards", as soon as you summarized your data, that summary would be invalidated by new incoming data. So it makes no real sense.
There is a setting for roles in Splunk that configures which indexes are searched by default when an index is not specified in the search itself. My guess is that this is what is going on here, if I understood your question correctly: the role of the user running the macro probably doesn't have the index where the data resides set as a default searched index. Here is a screenshot of the UI settings for a role's default searched indexes.
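The same role setting can also be expressed directly in authorize.conf; a minimal sketch (the role and index names are placeholders):

```
# authorize.conf (illustrative)
[role_analyst]
# indexes searched when the user does not specify index= in the search
srchIndexesDefault = main;app_index
# indexes the role is allowed to search at all
srchIndexesAllowed = main;app_index
```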