All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

There are multiple sourcetypes in index="main". I'm trying to run stats on sourcetype number one, but I also need a field from sourcetype number two. Is there any way to do this?
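One common pattern, offered as a minimal sketch: search both sourcetypes in one query and let stats knit the fields together. This assumes the events share a join key (the names st1, st2, common_id, field_from_st1, and other_field here are all hypothetical):

index="main" (sourcetype="st1" OR sourcetype="st2")
| stats values(field_from_st1) as field_from_st1 values(other_field) as other_field by common_id

Because stats groups on the shared key, the field from sourcetype two lands on the same row as the fields from sourcetype one, without resorting to the join command.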
I've got some logs I need to join and put on the same row. I've tried a few different ways and searched the community, but I can't seem to get exactly what I need. There's a log every 10 minutes for each host and each drive on said hosts (there are a lot of hosts and drives). Each log has 2 events for the same time and drive letter: one for free MB and one for percent free. Basically, I need to join together each set of these two separate events based on the time, host, and drive letter of the log. Is this possible?

Base query:

index=perfmon host=host1 Category="PERFORMANCE" collection="WIN_PERF" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes"

The drive letter is extracted as "instance"; percent and MB are both extracted as "Value".

Returns these logs:

"09/02/2021 21:48:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","Free Megabytes","d:","36092.00"
"09/02/2021 21:48:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","% Free Space","d:","41.47"
"09/02/2021 22:08:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","% Free Space","C:","19.30"
"09/02/2021 22:08:49","host1","PERFORMANCE","WIN_PERF","LogicalDisk","Free Megabytes","C:","19767.00"

Desired output:

Time                  Host     Drive    FreePercent    FreeMB
09/02/2021 21:48:49   host1    d:       41.47          36092.00
09/02/2021 22:08:49   host1    C:       19.30          19767.00

Any help would be appreciated.
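A hedged sketch of one way to do this with stats rather than join, assuming the paired events carry identical _time, host, and instance values, as in the samples above:

index=perfmon host=host1 Category="PERFORMANCE" collection="WIN_PERF" object="LogicalDisk" counter="% Free Space" OR counter="Free Megabytes"
| stats values(eval(if(counter="% Free Space", Value, null()))) as FreePercent values(eval(if(counter="Free Megabytes", Value, null()))) as FreeMB by _time, host, instance
| rename instance as Drive

The eval-inside-stats trick splits the single Value field into two columns based on the counter name, and the by clause pins each pair of events onto one row. If the two events' timestamps can differ by a second or two, bucket the time first (e.g. bin _time span=1m) before the stats.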
I have a csv file queried as follows: | inputlookup file_1.csv. It returns the result as a single line, in a single field or column:

A B C D E F G H i j k l m n o p q r s t u v w x

Now I want to turn the above result into multiple fields named A, B, C, D, E, F, G, H. Basically, what I am trying to achieve is to convert the single field into multiple fields, with each field name or field value extracted based on space separation in the single field above.
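A minimal sketch of one approach, assuming the lone column is called raw_line (hypothetical; substitute your actual field name): split on spaces and pick values out by position.

| inputlookup file_1.csv
| eval parts=split(raw_line, " ")
| eval A=mvindex(parts, 0), B=mvindex(parts, 1), C=mvindex(parts, 2), D=mvindex(parts, 3)
| fields - parts, raw_line

split() turns the string into a multivalue field and mvindex() is zero-based, so extend the second eval with one mvindex() per field you need.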
Hi Splunkers - We are trying to create a dashboard with conditional panels that show/hide based on token values. Easy enough. But we are also attempting to use a Submit button, and it's not working as we would like. Currently, the conditional panels are showing/hiding when a user changes the value in the dropdown input, but we would like the panels to show/hide AFTER a user has hit the submit button. Is this possible? FYI, we are not using Dashboard Studio for this particular dashboard.
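One workaround sketch (untested; token and panel names are hypothetical): give the dropdown searchWhenChanged="false" inside a fieldset with submitButton="true", then drive the show/hide tokens from a dummy search that references the submitted token, so the <done> handler only fires after Submit:

<fieldset submitButton="true">
  <input type="dropdown" token="view" searchWhenChanged="false">
    ...
  </input>
</fieldset>
<search>
  <query>| makeresults | eval choice="$view$"</query>
  <done>
    <condition match="$result.choice$ == &quot;optionA&quot;">
      <set token="show_panel_a">true</set>
      <unset token="show_panel_b"></unset>
    </condition>
    <condition>
      <set token="show_panel_b">true</set>
      <unset token="show_panel_a"></unset>
    </condition>
  </done>
</search>

Because $view$ only updates in the submitted token model, the dummy search reruns on Submit rather than on every dropdown change, and panels with depends="$show_panel_a$" / depends="$show_panel_b$" follow suit.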
I have two events as below.

Event 1: "id=1 api=xyz apiResTime=50"
Event 2: "id=1 api=xyz duration=200"

I want to plot the difference between duration and apiResTime by api. So far I have tried this:

index="my_index"
| search * "apiResponseTime"="*"
| table "api", "apiResponseTime"
| rename "api" as api1
| rename "apiResponseTime" as x
| append [search * "duration"="*"
    | table "api", "duration"
    | rename "api" as api2
    | rename "duration" as y ]
| eval api_match=if(match(api1, api2),1,0) //match the apis
| eval diff=if(api_match=1,y-x,y) // get the difference y-x on match
| table api1, api2, diff

But this is not giving me the required results. Any suggestions / pointers on how I can plot (timechart) the difference between (duration - apiResponseTime) by api? The above events can occur for multiple ids.
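A hedged sketch of a join-free alternative: group both events by their shared id and api, then compute the delta. Note the sample events carry apiResTime while the attempted search uses apiResponseTime; the sketch assumes the former, so adjust to whichever field actually exists:

index="my_index" (apiResTime=* OR duration=*)
| stats min(_time) as _time values(apiResTime) as apiResTime values(duration) as duration by id, api
| eval diff=duration-apiResTime
| timechart avg(diff) by api

stats pairs the two events for each id, min(_time) keeps a timestamp so timechart still works, and avg(diff) rolls the per-id differences up per api.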
Hello, I currently have Splunk Enterprise 8.0 in production with Universal and Heavy Forwarders 8.0. I plan to upgrade to the latest version, 8.2.2. In which order can I upgrade:

1. Upgrade the Splunk Enterprise servers (Master, Indexer, Search Head, Deployment, Licence server) first and the Forwarders later
2. Upgrade the Forwarders first and Splunk Enterprise after
3. Both Splunk Enterprise and the Forwarders at the same time
Hi all, I'm having some issues onboarding some new server logs into Splunk. These servers are RedHat 6 and 7 machines. I've gotten the Universal Forwarder agent installed onto them and dropped my depolyment_client app into the /opt/splunkforwarder/etc/apps directory (the app has a deploymentclient.conf file in it). These servers connect to my DS fine, but then encounter an issue when trying to download the other two apps that are part of some different serverclasses (one app is the Splunk TA for Linux and the other is an app that points to my indexers). In the splunkd log file on one of the machines I was getting this error:

-0500 WARN HTTPClient [18097 HttpClientPollingThread_DD738BE1-8B55-41C7-B82B-A9348CA4DF30] - Download of file /opt/splunkforwarder/var/run/all_nix_hosts/nix_forwarder_outputs_ssl-1630327417.bundle failed with status 502

I have other Linux machines that have connected to the DS and received the apps perfectly fine and are sending data. I've also done Windows servers with their respective apps, with no issues there. Any idea why this may be happening?
I want to download the Splunk User Behavior Analytics OVA for testing purposes for a client. After testing we will buy the license, but I could not find any Splunk UBA OVA. Can you please provide me with the download link for the Splunk UBA OVA? Thank you.
As part of a DR plan I am writing, I need to be alerted if an instance of Splunk Enterprise / ES has been deleted, removed, or disabled. I do have the Monitoring Console in place in both. Is using the MC to keep an eye on such issues the only way? Thank you in advance; I appreciate a response.
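One alternative sketch: a scheduled alert that flags any Splunk instance that has stopped reporting to the _internal index. The 15-minute threshold is an arbitrary assumption to tune:

| metadata type=hosts index=_internal
| eval minutesSinceLastEvent=round((now() - recentTime) / 60)
| where minutesSinceLastEvent > 15
| convert ctime(recentTime)
| table host, recentTime, minutesSinceLastEvent

A deleted, removed, or disabled instance stops shipping its internal logs, so it surfaces here regardless of why it went quiet.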
Hello, I am new to Splunk. I have a problem trying to combine two or more searches into one. Pretty much, my data looks like so:

{
     "TimeStamp": "\/Date(1630425120000)\/",
     "Name": "Plan-MemoryPercentage-Maximum.json",
     "Maximum": 14
}
{
     "TimeStamp": "\/Date(1630425120000)\/",
     "Name": "Plan-MemoryPercentage-Average.json",
     "Average": 14
}

Both sets will have the same TimeStamp for the entries, and I just want a table that has the matching time stamps with a column for max and a column for avg. So far I'm able to get a single table going with a query that looks like:

Name="Plan-MemoryPercentage-Maximum.json"
| table *
| fields TimeStamp, Maximum
| fields - _time, _raw

But I'm really struggling to figure out how to combine 2 searches into 1 table. Anyone have any ideas?
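A minimal sketch of one way, assuming both event types live in the same index: search them together and group on the shared TimeStamp instead of concatenating two searches.

Name="Plan-MemoryPercentage-Maximum.json" OR Name="Plan-MemoryPercentage-Average.json"
| stats values(Maximum) as Maximum, values(Average) as Average by TimeStamp

Each event contributes only one of the two columns, and stats merges the pair that shares a TimeStamp onto a single row.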
Hi, I need to press a button on a dashboard and for it to trigger a .sh script in my app. However, when I load the page, it runs the script before I press the button.

<dashboard script="run_action.js">
  <label>Test Action</label>
  <row>
    <panel>
      <html>
        <button class="btn btn-primary button1">Run search!</button>
      </html>
    </panel>
  </row>
</dashboard>

/data/apps/splunk/splunk/etc/apps/MxMonitor_MONITORING_MVP_BETA/appserver/static/run_action.js:

require([
    "jquery",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function($, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: "| runshellscript Test_Script123.sh 1 1 1 1 1 1 1 1"
    });

    $(".button1").on("click", function() {
        var ok = confirm("Are you sure?");
        if (ok) {
            mysearch.startSearch();
            alert('attempted restart!');
        }
        //else {
        //    alert('user did not click ok!');
        //}
    });
});
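A hedged guess at the culprit: SearchManager's autostart option expects a boolean, and the JavaScript string "false" is truthy, so the search kicks off the moment the manager is created. A minimal corrected sketch:

var mysearch = new SearchManager({
    id: "mysearch",
    autostart: false,  // boolean false, not the string "false"
    search: "| runshellscript Test_Script123.sh 1 1 1 1 1 1 1 1"
});

With the boolean in place, the search should only run when startSearch() is called from the click handler.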
We have the Alfresco application as a SaaS application, and we can get the logs through the Alfresco APIs. I would like to know how I can have them ingested into Splunk Cloud. I think there was an app, but it is end of life. Is there another way to get the logs into Splunk Cloud, through heavy forwarders or other add-ons?
Hello Splunkers. I have a question: we are now moving from old servers to new ones. We had 5 indexers, not clustered, and now we are going to have 3 indexers. We want to move all indexed data to the new instances. I know the procedure for moving data, but I have an idea for making this process easier. Maybe you have tried this solution and know whether it will work.

The idea is to set up data replication, e.g. OldIDX1 => NewIdx1, OldIDX2 => NewIdx2, OldIDX3,4,5 => NewIdx3. Can you please advise whether such a 'solution' or 'workaround' will work, or whether it needs to be tested first?

With best regards, Gene
I'm getting indications that Splunk Enterprise / ES was restarted. Is it possible to find out when and by whom? Thank you very much for your response.
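A sketch of where to look, hedged since audit coverage depends on how the restart was issued. The startup banner in the internal logs pins down when:

index=_internal sourcetype=splunkd "Splunkd starting"
| table _time, host

As for who, if the restart came through the web UI or CLI, it may be recorded in the audit index:

index=_audit "restart"
| table _time, user, action, info

A restart caused by the OS (reboot, service manager, crash) generally leaves nothing in _audit, so an empty second search is itself a clue.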
Greetings, I want to exclude search results if a field contains a value compared against another field with additional text added. So it would look something like this:

Field1=value
Field2=Field1+[text]
Field3=[value2]

Exclude results where Field2=Field1+[text] and Field3=[value2]. Can anyone tell me what the syntax in Splunk would be? Thanks.
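A minimal sketch in SPL, where "." is the string concatenation operator in eval/where expressions. The literal suffix "_text" and the value "value2" are hypothetical placeholders:

... | where NOT (Field2 == Field1 . "_text" AND Field3 == "value2")

Only rows matching both conditions are dropped; everything else passes through.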
Hi Team, the Cisco ESA Splunk add-on is not working with the latest release and logs. Can you please fix it, or deprecate it if you don't want to support it?
So when we build a new server, we install the Splunk forwarder on it. Is there a way to script adding that new deployment client to a server class?
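One scriptable sketch, assuming a server class named new_servers (hypothetical): append the host to a whitelist entry in serverclass.conf on the deployment server, then reload without restarting:

# $SPLUNK_HOME/etc/system/local/serverclass.conf on the deployment server
[serverClass:new_servers]
whitelist.0 = existing-host
whitelist.1 = new-host-01

$SPLUNK_HOME/bin/splunk reload deploy-server

If new hosts follow a naming convention, a wildcard whitelist (e.g. whitelist.0 = web-prod-*) avoids touching the file at all when a machine is built.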
index=firewall* | table _time origin_sic_name service proto service_id src dst rule policy_name rule_name s_port action message_info xlatesrc xlatedst
Hello, I enabled the heavy forwarder on my Splunk server (8.0.6) in order to forward logs to a third-party server running QRadar. We identified 4 sourcetypes whose logs should be forwarded (we don't want to forward everything). However, it appears the target machine is receiving all the logs. I thought my outputs.conf configuration would ensure that only the logs matching these sourcetypes were forwarded. Here is the content of my tcpouts.conf file:

[syslog]
defaultGroup = syslogGroup

[syslog:syslogGroup]
server = 192.168.152.68:514

[syslog:192.168.152.68:514]
syslogSourceType = f5:bigip:syslog
syslogSourceType = bluecoat:proxysg:access:syslog
syslogSourceType = FO_apache
syslogSourceType = backbone_in

Does anyone know how to filter so that only the logs from these 4 sourcetypes are forwarded? Thanks.
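A sketch of the usual mechanism: selective syslog routing is driven by props.conf/transforms.conf on the heavy forwarder, not by outputs.conf, and defaultGroup sends everything. Assuming the output group keeps the name syslogGroup from the post:

# props.conf
[f5:bigip:syslog]
TRANSFORMS-route_qradar = send_to_qradar

[bluecoat:proxysg:access:syslog]
TRANSFORMS-route_qradar = send_to_qradar

[FO_apache]
TRANSFORMS-route_qradar = send_to_qradar

[backbone_in]
TRANSFORMS-route_qradar = send_to_qradar

# transforms.conf
[send_to_qradar]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup

Removing defaultGroup = syslogGroup from the [syslog] stanza should then stop the remaining sourcetypes from being sent, since only events tagged by the transform carry the _SYSLOG_ROUTING key.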
Hello there, I have a question: I'm trying to import a custom SVG map into the choropleth map visualization in Dashboard Studio. I had a problem with a classic SVG map, a map of France. The SVG file contains elements like <g> groups, which seem not to work with Dashboard Studio; I had to edit the .svg file directly and delete the <g> elements everywhere. I also had to add "name=", "stroke-width=", "stroke=" and "fill=" to every <path>; without them, the visualization wasn't working. I wonder if there is a specific version / format of SVG files to use with Dashboard Studio? Even importing the map into Inkscape and exporting it as in the example (documentation: https://www.splunk.com/en_us/blog/platform/painting-with-data-choropleth-svg.html ) didn't work.

The map of France I used (before editing it): https://upload.wikimedia.org/wikipedia/commons/b/b6/D%C3%A9partements_de_France-simple.svg

Best regards,
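For reference, a sketch of the flattened shape described above: no <g> wrappers, each region as a bare <path> carrying the attributes the post says were required (names and coordinates hypothetical):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000">
  <path name="dept-01" fill="#cccccc" stroke="#ffffff" stroke-width="1" d="M 100 100 L 200 100 L 200 200 Z"/>
  <path name="dept-02" fill="#cccccc" stroke="#ffffff" stroke-width="1" d="M 200 100 L 300 100 L 300 200 Z"/>
</svg>

Whether this is the format Dashboard Studio formally requires is unclear from the docs; it simply mirrors the manual edits that made the map render.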