All Topics

Hi, I need to press a button on a dashboard and have it trigger a .sh script in my app. However, when I load the page it runs the script before I press the button.

<dashboard script="run_action.js">
  <label>Test Action</label>
  <row>
    <panel>
      <html>
        <button class="btn btn-primary button1">Run search!</button>
      </html>
    </panel>
  </row>
</dashboard>

/data/apps/splunk/splunk/etc/apps/MxMonitor_MONITORING_MVP_BETA/appserver/static/run_action.js:

require([
    "jquery",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function($, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        // autostart must be the boolean false; the string "false" is truthy,
        // which makes the search start as soon as the page loads.
        autostart: false,
        search: "| runshellscript Test_Script123.sh 1 1 1 1 1 1 1 1"
    });

    $(".button1").on("click", function() {
        var ok = confirm("Are you sure?");
        if (ok) {
            mysearch.startSearch();
            alert("attempted restart!");
        }
    });
});
We have Alfresco as a SaaS application, and we can get its logs through the Alfresco APIs. How can I ingest them into Splunk Cloud? I believe there was an app for this, but it is end-of-life. Is there a way to get the logs into Splunk Cloud, for example through heavy forwarders or other add-ons?
Hello Splunkers. I have a question: we are moving from old servers to new ones. We had 5 indexers, not clustered, and we are going to have 3 indexers. We want to move all indexed data to the new instances. I know the standard procedure for moving data, but I have an idea that might make the process easier; perhaps you have tried this and know whether it will work.

The idea is to set up data replication such as OldIDX1 => NewIDX1, OldIDX2 => NewIDX2, OldIDX3,4,5 => NewIDX3. Can you please advise whether such a solution or workaround will work, or whether it needs to be tested first?

With best regards, Gene
I'm getting indications that Splunk Enterprise / ES was restarted. Is it possible to find out when, and by whom? Thank you very much for your response.
Greetings, I want to exclude search results if a field contains a value that equals another field with additional text appended. It would look something like this:

Field1=value
Field2=Field1+[text]
Field3=[value2]

Exclude results where Field2=Field1+[text] and Field3=[value2]. Can anyone tell me what the syntax in Splunk would be? Thanks.
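A minimal sketch of the exclusion logic in Python, for illustration only — the field names are from the post, but the appended text "_suffix" and the value "bad" are placeholders. In SPL the equivalent would be along the lines of `| where NOT (Field2=Field1."text" AND Field3="value2")`, since `.` concatenates strings in eval/where expressions.

```python
# Exclude rows where Field2 equals Field1 plus a fixed suffix AND Field3
# equals a given value; keep everything else.
SUFFIX = "_suffix"   # placeholder for [text]
VALUE2 = "bad"       # placeholder for [value2]

rows = [
    {"Field1": "a", "Field2": "a_suffix", "Field3": "bad"},   # excluded
    {"Field1": "a", "Field2": "a_suffix", "Field3": "good"},  # kept: Field3 differs
    {"Field1": "b", "Field2": "other",    "Field3": "bad"},   # kept: Field2 differs
]

kept = [r for r in rows
        if not (r["Field2"] == r["Field1"] + SUFFIX and r["Field3"] == VALUE2)]
print(len(kept))  # 2
```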
Hi team, the Cisco ESA Splunk add-on is not working with the latest release and logs. Can you please fix it, or deprecate it if you don't want to support it?
When we build a new server and install the Splunk forwarder on it, is there a way to script adding that new deployment client to a server class?
index=firewall* | table _time origin_sic_name service proto service_id src dst rule policy_name rule_name s_port action message_info xlatesrc xlatedst
Hello, I enabled the heavy forwarder on my Splunk server (8.0.6) in order to forward logs to a third-party server running QRadar. We identified 4 sourcetypes whose logs should be forwarded (we do not want to forward everything). However, the target machine is receiving all the logs. I thought my outputs.conf configuration would ensure that only the logs matching these sourcetypes were forwarded. Here is the content of my outputs.conf file:

[syslog]
defaultGroup = syslogGroup

[syslog:syslogGroup]
server = 192.168.152.68:514

[syslog:192.168.152.68:514]
syslogSourceType = f5:bigip:syslog
syslogSourceType = bluecoat:proxysg:access:syslog
syslogSourceType = FO_apache
syslogSourceType = backbone_in

Does anyone know how to filter so that only the logs from these 4 sourcetypes are forwarded? Thanks.
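One documented way to forward only selected sourcetypes is to route per-sourcetype with props.conf/transforms.conf on the heavy forwarder, rather than listing sourcetypes in outputs.conf. A sketch, assuming the four sourcetypes and the syslogGroup target from the post (the transform name send_to_qradar is made up; removing defaultGroup from the [syslog] stanza is also needed so that nothing is forwarded by default):

```
# props.conf (one stanza per sourcetype to forward)
[f5:bigip:syslog]
TRANSFORMS-routeToQradar = send_to_qradar

[bluecoat:proxysg:access:syslog]
TRANSFORMS-routeToQradar = send_to_qradar

[FO_apache]
TRANSFORMS-routeToQradar = send_to_qradar

[backbone_in]
TRANSFORMS-routeToQradar = send_to_qradar

# transforms.conf
[send_to_qradar]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup
```

The REGEX matches every event of those sourcetypes and sets the _SYSLOG_ROUTING key to the syslogGroup output group; events of other sourcetypes never acquire that key and so are not sent to the syslog output.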
Hello there, I have a question: I'm trying to import a custom SVG map into the choropleth map visualization in Dashboard Studio. I had a problem with a classic SVG map, a map of France. The SVG file contains elements like <g> groups, which do not seem to work with Dashboard Studio; I had to edit the .svg file directly and delete the <g> elements everywhere. I also had to add "name=", "stroke-width=", "stroke=" and "fill=" attributes to every <path>; without them the visualization wasn't working. Is there a specific version / format of SVG file to use with Dashboard Studio? Even importing the map into Inkscape and exporting it as in the example (documentation: https://www.splunk.com/en_us/blog/platform/painting-with-data-choropleth-svg.html) didn't work.

The map of France I used (before editing it): https://upload.wikimedia.org/wikipedia/commons/b/b6/D%C3%A9partements_de_France-simple.svg

Best regards,
Hi all, I'm plotting a timechart. When I run the search, the graph displays fine, as expected [see attached capture_1]. However, when I embed the graph in a dashboard as a panel, it truncates the data [see attached capture_2]. As you can see in capture_2, it shows a dip at the start when embedded in the dashboard panel, whereas that dip is not actually there in capture_1. Any suggestions?
Hi everyone, I installed Splunk Enterprise from the MSI file on my own Windows laptop. For practice, I would like to install Splunk in a Linux environment on the same machine. Since we can use PuTTY or third-party tools to run Linux on Windows, at that point we would need to install Splunk again. Is that possible? Or do I need to uninstall the Windows Splunk (.exe) installation in order to install it through Linux?
Of the data models provided with Enterprise Security, one of the accelerated data models suddenly has a much higher run time than usual. Any suggestions for what the issue could be, or where in the internal logs we should look to identify the potential root cause?
Hi, I want to see my data in the ES dashboard Security Domains -> Endpoint -> Endpoint Changes. I created the following things:

props.conf with CIM-compliant field aliases.

eventtypes.conf:
[MyEventType]
search = index=MyIndex sourcetype=MySourcetype

tags.conf:
[eventtype=MyEventType]
change = enabled
endpoint = enabled

I can successfully search the events with tag=change and tag=endpoint. I can also successfully search the data with the data model constraint `cim_Change_indexes` tag=change NOT (object_category=file OR object_category=directory OR object_category=registry) tag=endpoint. However, the dashboard stays empty. When I manually execute one of the dashboard searches,

| tstats append=T count from datamodel=Change.All_Changes where nodename="All_Changes.Endpoint_Changes"

I get no results. When I change nodename="All_Changes.Endpoint_Changes" to nodename="All_Changes", I see my events. So the question is: what do I need to do to get my events into the node All_Changes.Endpoint_Changes?
Hey there Splunk heroes,

Story/background: there is a field called "src_ip" in my correlation search, covering more than 5000 IP addresses. I am matching the IP addresses that should not be in particular CIDR ranges using the cidrmatch function, which works perfectly. It looks something like this:

| where (NOT cidrmatch("34.20.223.128/25",src_ip) AND NOT cidrmatch("13.9.22.0/25",src_ip) AND NOT cidrmatch("13.56.21.18/25",src_ip) AND NOT cidrmatch("35.17.29.0/26",src_ip) AND NOT(many-more,src_ip))

Solution required: I want to simplify this.

Solutions tried: the answers I found on the forum suggest creating a lookup table and searching through it. So I created a lookup table named "match_cidr.csv"; this CSV consists of more than 100 CIDR blocks under a field called cidr_match_src_ip. (There is a tstats command in the search as well.) I tried:

[ | inputlookup match_cidr.csv | where src_ip != cidr_match_src_ip ]  ==> this won't work, since I am comparing a CIDR block to an IP address directly.

where NOT cidrmatch([| inputlookup match_cidr.csv], src)  ==> tried this as well.

What can I use here, or what else can you recommend? Feel free to ask me any questions if my message isn't clear.
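The underlying "is this IP inside any of a list of CIDR blocks" logic can be sketched in Python with the standard ipaddress module (the CIDR blocks below are illustrative values from the post, trimmed to valid network addresses):

```python
import ipaddress

# A small stand-in for the match_cidr.csv lookup of CIDR blocks.
cidr_blocks = ["34.20.223.128/25", "13.9.22.0/25", "35.17.29.0/26"]
networks = [ipaddress.ip_network(c) for c in cidr_blocks]

def in_any_block(src_ip):
    """Return True if src_ip falls inside any of the CIDR blocks."""
    ip = ipaddress.ip_address(src_ip)
    return any(ip in net for net in networks)

print(in_any_block("34.20.223.200"))  # True: inside 34.20.223.128/25
print(in_any_block("10.0.0.1"))       # False: matches no block
```

In Splunk itself, a lookup can do this natively if (as an assumption about your setup) its definition in transforms.conf sets match_type = CIDR(cidr_match_src_ip); you can then `| lookup match_cidr cidr_match_src_ip AS src_ip OUTPUT cidr_match_src_ip AS matched | where isnull(matched)` to keep only IPs outside every block.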
Hi, I want to search for this string:

"xyzetc\";0,

I'm unable to search for this exact pattern; I get an "Unbalanced quotes" error since the string already contains quotes.
I want to use a subsearch to get the start and end time of the newest transaction (here, a bot session). The subsearch alone gives me:

starttime = 09/01/2021:17:28:49
endtime = 09/01/2021:19:42:50

At first I used the subsearch without strftime(), but Splunk said earliest/latest can't parse epoch time and that it wants the format %m/%d/%Y:%H:%M:%S. That brings me to my current search, where Splunk says "Invalid value "starttime" for time term 'earliest'". When I use the results of the subsearch run on its own, it works. How can I make use of the start/end time? Or is there a better method to limit my main search to the newest bot session? My search (not the final search, but I want to work with the events from a specific session):

index="fishingbot"
  [search index=fishingbot
  | transaction startswith="Anmeldung erfolgreich!" endswith="deaktiviert!"
  | eval endtime=strftime((_time+duration), "%m/%d/%Y:%H:%M:%S")
  | eval starttime=strftime(_time, "%m/%d/%Y:%H:%M:%S")
  | top starttime endtime limit=1
  | table starttime endtime] earliest=starttime latest=endtime
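The strftime conversion itself can be checked outside Splunk; a Python sketch using hypothetical epoch values (chosen so they reproduce the post's timestamps, interpreted as UTC):

```python
from datetime import datetime, timezone

fmt = "%m/%d/%Y:%H:%M:%S"   # the timeformat earliest/latest expect here
start_epoch = 1630517329    # hypothetical epoch for the session start (UTC)
end_epoch = 1630525370      # hypothetical epoch for the session end (UTC)

earliest = datetime.fromtimestamp(start_epoch, tz=timezone.utc).strftime(fmt)
latest = datetime.fromtimestamp(end_epoch, tz=timezone.utc).strftime(fmt)
print(earliest)  # 09/01/2021:17:28:49
print(latest)    # 09/01/2021:19:42:50
```

On the SPL side, the "Invalid value "starttime"" error suggests the outer search is using the literal field names. A subsearch's result rows expand into field=value terms, so renaming the fields inside the subsearch to earliest and latest (for example `| rename starttime AS earliest, endtime AS latest | table earliest latest`) and dropping the outer `earliest=starttime latest=endtime` should make the subsearch itself supply the time bounds.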
So, I have multiple IP addresses and I want to combine them, using regex or otherwise, and compare them against the field. For example, this is my existing query:

| search NOT src IN (10.161.5.50, 10.161.5.51, 10.161.5.52, 10.161.5.53, 10.161.10.20, 192.168.1.120, 192.168.1.130)

I got 15 matching results. What I tried in order to shorten it:

| search NOT src IN ("10.161.5.5[0-3]", 10.161.10.20, 192.168.1.120, 192.168.1.130)

Doing this increased the matched results to 30. Why is this happening, and what can I do to prevent it? Any solutions?
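A likely explanation: SPL's IN(...) matches terms literally (with `*` wildcards), not as regular expressions, so "10.161.5.5[0-3]" matches no event at all, the NOT filter passes more events, and the result count grows. A character-class range like this is regex syntax, as a Python sketch shows (SPL's `| regex` command accepts the same kind of pattern):

```python
import re

# The [0-3] character class only works in a regex engine, not in SPL's
# literal IN(...) matching.
pattern = re.compile(r"^10\.161\.5\.5[0-3]$")

print(bool(pattern.match("10.161.5.52")))  # True: within the 50-53 range
print(bool(pattern.match("10.161.5.54")))  # False: outside the range
print(bool(pattern.match("10.161.5.5")))   # False: the class requires one digit
```

So either keep the explicit IN list, or filter with something along the lines of `| regex src!="^10\.161\.5\.5[0-3]$"` for the range portion.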
Hi,

[Current pie chart]

In the above pie chart, the details of the highlighted cities are not displayed; I have to mouse over them to check the details. Please let me know how to show the hidden values in the pie chart as well.

Regards, Madhusri R
I'm trying to calculate percentages based on the number of events per group. There are actually a lot of events, so I can't use a method like count(eval(...)). A summary of the events is as follows:

color
------
green
red
greed
greed
red

Here's my search so far:

index="test" sourcetype="csv"
| stats count as numColor by color
| eval total=5
| eval percent=printf("%.2f", (numColor/total)*100)
| sort num(percent)
| table color numColor percent

How do I replace the hardcoded value of "total" with a count() function or some other method? Any help would be appreciated.