All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I don't have any application, but when I try to create one, "Unable to create application" is shown. Is there any way for Lite users to contact the support desk?

Name: turing202002042221105
Global Account Name: turing202002042221105_1fc24c8b-c7b0-40ec-bea8-0f07ed51eea1
Edition: AppDynamics Lite
Access Key:
Expiration Date: Never
Hi, I have been looking but can't seem to make much sense of it all; I'm new to Splunk. I'm trying to create a search and alert from a CSV file. The CSV file contains Domain Admin accounts, and I wanted to create a search for a number of event IDs on those domain admin accounts.

index=win sourcetype=wineventlog EventCode=*the events im looking for* | inputlookup file.csv

but I can't seem to make it work. Any help would be great.
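One common pattern is to use the lookup as a subsearch filter rather than piping into inputlookup (which replaces the search results entirely). A hedged, untested sketch, assuming the CSV has an Account_Name column matching the event field of the same name; the EventCode values are placeholders for the event IDs of interest:

```
index=win sourcetype=wineventlog EventCode IN (4720, 4726, 4732)
    [ | inputlookup file.csv | fields Account_Name ]
```

The subsearch expands to (Account_Name="..." OR Account_Name="...") and filters the main search to only those accounts.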
Hi all, I'm a Splunk beginner and I'm having a hard time getting this particular search down. My objective is to get the "Account_Name" field from 2 different event codes (4624 type 10 & 4778). The issue is I can't figure out how to get the 2nd instance of Account_Name for the 4624 events but the first instance of it in the 4778 events. This is because Windows uses the Account_Name field twice in a lot of logs, but not in some. So I need the first Account_Name in 4778, and the second Account_Name in 4624. Here is what I have so far; I'm having trouble putting in that middle piece.

index=main source="WinEventLog:Security" ("eventcode=4624" AND Logon_Type=10) | eval Acct=mvindex(Account_Name,1) ***Also find "eventcode=4778" Account_Name**** | rename Acct as "Account Used on Remote Machine" | rename Client_Name as "Source Machine" | rename ComputerName as "Destination Machine" | timechart count by "Account Used on Remote Machine"
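One way to handle this is to search both event codes at once and pick the mvindex conditionally per event code. A hedged, untested sketch based on the field names in the question:

```
index=main source="WinEventLog:Security" ((EventCode=4624 AND Logon_Type=10) OR EventCode=4778)
| eval Acct=if(EventCode=4624, mvindex(Account_Name,1), mvindex(Account_Name,0))
| rename Acct as "Account Used on Remote Machine", Client_Name as "Source Machine", ComputerName as "Destination Machine"
| timechart count by "Account Used on Remote Machine"
```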
Hello, I have many Windows machines sending logs through the agent to index=main. With what query can I monitor, either from a dashboard or from an alert, when one of these machines stops sending logs after an interval of 24 hours? Note: I don't have a deployment server.
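A common approach is to compare each host's most recent event time against now(). A hedged, untested sketch:

```
| tstats latest(_time) as last_seen where index=main by host
| eval hours_silent=round((now() - last_seen) / 3600, 1)
| where hours_silent > 24
```

Note that only hosts with at least one event still within the index's retention window will appear; hosts that have never reported need a separate inventory lookup.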
Hello, has anyone had any experience with setting up coldToFrozenScript to archive to 2 different locations?
How can I collect Windows event code 403 from all my Windows servers in Splunk Enterprise via the web interface? Thank you.
Hello all, I currently have the following data set, and a table will look like this:

Test   Iteration  Results
Test1  1          400
Test1  2          500
Test1  3          600
Test2  0          1000
Test2  1          500
Test2  2          1000
Test2  3          2000

We run a test several times and save the results for each run. What I need to do is calculate iteration 0 for the tests that don't have it (Test1), which will be the median of all other iterations available. What I want to do is add a new row with the new value:

Test   Iteration  Results
Test1  0          500
Test1  1          400
Test1  2          500
Test1  3          600
Test2  0          1000
Test2  1          500
Test2  2          1000
Test2  3          2000

It needs to add iteration 0 only for those tests that don't have it, and ignore the other cases. I've tried using appendpipe + eventstats, but it only rewrites the Iteration and Results fields:

|appendpipe [ |eventstats median(Results) as Results, first(Test) as Test,  | eval iteration=0 ]

I would like to get some ideas on how to do this. Any help will be appreciated; thank you in advance.
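One possible approach, untested, is to let the appendpipe subsearch keep only the tests whose minimum Iteration is above 0, compute one median row per such test, and append it:

```
| appendpipe
    [ eventstats min(Iteration) as min_iter by Test
      | where min_iter > 0
      | stats median(Results) as Results by Test
      | eval Iteration=0 ]
| sort Test Iteration
```

Since appendpipe leaves the original rows untouched and only appends the subsearch output, Test2 (which already has iteration 0) is ignored by the where clause.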
Well hello, gorgeous people! I have a field that can take anywhere from 3 to 5 different values, which are cities; this field is called "CITY". If I want to get the percentage of each city over the total count, I always have to do something like this:

| stats count(eval(CITY="A")) as CITY_A, count(eval(CITY="B")) as CITY_B, count(eval(CITY="C")) as CITY_C, count(CITY) as TOTAL | eval %P_CITY_A=CITY_A/TOTAL (repeat for each city)

but often I find myself wanting to calculate percentages of the values of fields that can have up to 15 different values, and I just wonder if there is a faster, more efficient way of doing this. Thank you so much, people. Love, Cindy
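A hedged, untested sketch of a generic version that needs no per-value eval: count by the field, then divide each count by the overall total.

```
| stats count by CITY
| eventstats sum(count) as TOTAL
| eval percent=round(count / TOTAL * 100, 2)
```

This produces one row per CITY value, however many there are, so it scales to fields with 15 values without editing the search.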
Hi. I am trying to run the Build a Custom Visualization tutorial from https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/CustomVizTutorial#Tutorial_overview . But when I use "npm run build", I receive this error:

[webpack-cli] SyntaxError: Invalid regular expression: /(\p{Uppercase_Letter}+|\p{Lowercase_Letter}|\d)(\p{Uppercase_Letter}+)/: Invalid escape.

I have been following the steps closely. I believe this may be an issue due to package versions. Any help would be appreciated. Thanks.
Hello, it is the first time I am going to use this command, and the truth is I am a bit confused; even though I have read about it, it is still not clear to me. I have Windows machines with the agent installed sending logs to index=main, and I have a .csv file with the inventory of all the machines. I need to get a list of the hosts that have never reported logs, either because the agent has not been installed yet or because a host reported logs and at a certain point stopped doing so. I installed the "Lookup Editor" and already uploaded the inventory there. Using the query | inputlookup hostinventory.csv I already get the inventory information. But I need to compare the hosts that index=main sees reporting (or having reported) logs vs. the inventory CSV file, to get an idea of which hosts are reporting and which ones are not. In the host inventory file I have a column called "host"; I need to compare it with the "host" field from the main index.

index=main | stats count by ComputerName (brings me the hosts that are reporting logs from the main index)
| inputlookup hostinventory.csv (brings me the hosts that are in the csv file in the field called "host")

but I cannot correlate the two sources of information to get a list of which hosts are reporting and which are not reporting logs.
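One possible pattern, untested, is to append the index's host list to the lookup and flag each side; it assumes the index field and the lookup column can both be normalized to a common host field (the question mentions ComputerName, which may need renaming first):

```
| inputlookup hostinventory.csv
| fields host
| eval in_inventory=1
| append
    [ search index=main
      | stats count by host
      | eval reporting=1 ]
| stats max(in_inventory) as in_inventory, max(reporting) as reporting by host
| eval status=case(isnull(reporting), "not reporting",
                   isnull(in_inventory), "reporting but not in inventory",
                   1=1, "reporting")
```

Hosts present only in the CSV end up with a null reporting flag and are labeled "not reporting".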
I need your expert advice about Splunk Enterprise & Enterprise Security (ES) backups + disaster recovery + HA, please. Any steps to succeed in this project, especially in an AWS environment, would be appreciated. Thank you in advance.
Trying to enable startup on SLES 15.1:

/opt/splunkforwarder/bin # ./splunk enable boot-start
Unknown option: levels
usage:
chkconfig -A|--allservices              (together with -l: show all services)
chkconfig -t|--terse [names]            (shows the links)
chkconfig -e|--edit [names]             (configure services)
chkconfig -s|--set [name state]...      (configure services)
chkconfig -l|--list [--deps] [names]    (shows the links)
chkconfig -L|--liston [--deps] [names]  (as -l, enabled in at least 1 level)
chkconfig -c|--check name [state]       (check state)
chkconfig -a|--add [names]              (runs insserv)
chkconfig -d|--del [names]              (runs insserv -r)
chkconfig -h|--help                     (print usage)
chkconfig -f|--force ...                (call insserv with -f)
chkconfig [name]                        same as chkconfig -t
chkconfig name state...                 same as chkconfig -s name state
chkconfig --root=<root> ...             use <root> as the root file system
I am working with a stats table with 7 fields.

| tstats count as "f" where a=* b=* c=* d=* e=* by a b c d e
| stats sum(f) as f, list(f) as f_list, max(f) as f_max, list(c) as c_list, list(d) as d_list, list(e) as e_list by b

I would like to be able to take b's f_max and match it to the correlating values in c_list, d_list, e_list, and f_list. Anyone able to provide the SPL for this type of search?
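One possible approach, untested, is to filter the rows down to the per-b maximum before building the lists, so every list entry already corresponds to f_max:

```
| tstats count as f where a=* b=* c=* d=* e=* by a b c d e
| eventstats max(f) as f_max by b
| where f=f_max
| stats list(c) as c_list, list(d) as d_list, list(e) as e_list, values(f_max) as f_max by b
```

eventstats attaches the group maximum to each row without collapsing them, which is what makes the where comparison possible.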
Hello, I need to replace the deployer on a search head cluster, but I can't find any procedure in the documentation for doing it. Does anyone know where I can find some documentation for this replacement? Thanks in advance.
Hi All, I'm working on a search, where I currently have the following:

..base search.. | table static_name, static_time, static_title, static_owner, static_id, static_description

Apart from static_title, static_time, and static_id, the other fields are dynamic (they change as the search runs with different inputs: static_owner gets replaced by file_name, and other fields like fqdn, process, event_hash etc. get added to the search).

What I'm trying to achieve is that the static fields, which are always present in the search, should be shown as separate fields, while the dynamic fields are merged into a new, multivalued field called Combined_Field, irrespective of which new fields show up.

| table static_time, static_owner, static_id, Combined_Field (a multivalued field comprising the values of all the dynamic fields)

..base search.. | table static_name, static_time, static_title, static_owner, static_id, static_description | eval Combined_Field = null() | foreach * [ eval Combined_Field=if('<<field>>'==static_time OR '<<field>>'==static_owner OR '<<field>>'==static_id, '<<field>>', mvappend('<<field>>', Combined_Field)) ] | table static_time, static_owner, static_id, Combined_Field

The Combined_Field always remains empty. Could anyone check and let me know what I am doing wrong, or whether this can be achieved via a different approach? I've also tried the foreach command with case instead of if, with no luck.

Thank you in advance, S
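One thing worth noting is that inside foreach, the token in single quotes ('<<FIELD>>') expands to the field's value, while in double quotes ("<<FIELD>>") it expands to the field's name as a string, so the comparison should be against name literals, not other fields' values. A hedged, untested sketch:

```
| eval Combined_Field=null()
| foreach *
    [ eval Combined_Field=if("<<FIELD>>"=="static_time" OR "<<FIELD>>"=="static_owner" OR "<<FIELD>>"=="static_id",
                             Combined_Field,
                             mvappend(Combined_Field, '<<FIELD>>')) ]
| table static_time, static_owner, static_id, Combined_Field
```

With mvappend, null first arguments are simply skipped, so the accumulator grows as each dynamic field is visited.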
Hi Team, I have a simple requirement but am unable to get it. I am using the query

index=tms sourcetype=kafka type=ssh | stats count by NETYPE

For the above query I get output like this:

NETYPE   Count
CISCO    100
JUNIPER  200

However, I want the output as:

CISCO  JUNIPER
100    200

Can you please help with how I can get the above output table?
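The transpose command can swap rows and columns; a hedged, untested sketch using the NETYPE values as the new column headers:

```
index=tms sourcetype=kafka type=ssh
| stats count by NETYPE
| transpose 0 header_field=NETYPE
| fields - column
```

Here 0 means transpose all rows, header_field takes column names from NETYPE, and the trailing fields removes the auto-generated "column" field that holds the old header.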
Hi Splunkers, my requirement is to keep the selection in a column chart highlighted unless a different choice is made in the same chart. I have tried the approach given by @niketn (link), but I am unable to identify the relevant component of the chart (the equivalent of td or tr) for highlighting. If someone already has an idea, please help; or if you are aware of a chart that supports this selection-highlight behavior out of the box, please recommend it. Thanks
Hello! I am using the Splunk Add-on for ServiceNow to retrieve data from standard ServiceNow tables, in particular the "sc_req_item" table for service requests. The form has a custom field that I need to extract, and it has an internal id of "variables.4d1b6d291b2fd4d025d5eb186e4bcb19". When calling the ServiceNow REST API with Postman I can retrieve the field without issue. When trying to retrieve the same field using the Splunk Add-on for ServiceNow I get the message: "Values should begin with a letter or '$', and can include '_' and numbers, but got the value: variables.4d1b6d291b2fd4d025d5eb186e4bcb19". I can sync all the fields I need through the connector, with the exception of the custom field, which has a . (period) in the name. I have tried using quotes and replacing the . (period) with _ (underscore), but I cannot get the field synced. Here is the input configuration for reference. Is what I am trying to do possible with the Splunk Add-on for ServiceNow? Thanks! Andrew
Hi, I would like to ask if there is some possibility to order columns based on a requirement. Case:

<search>
| eval lower_raw = lower(_raw)
| rex field=lower_raw "^.*d=(?<opentask>[0-9]+).*"
| rex field=lower_raw "^.*pm\s(?<trace>[0-9a-z-]+).*"
| rex field=lower_raw "^.*taskid=(?<opentask>[0-9]+).*"
| rex field=lower_raw "^.*uuid=(?<trace>[0-9a-z-]+).*"
| eval task=opentask ."_".trace
| transaction task
| eval timedelay=case(duration>=0 AND duration<2,"1 sec",duration>=2 AND duration<6,"2-5 sec",duration>=6 AND duration<11,"6-10 sec",duration>=11,"11 and more sec",1=1,"error")
| timechart span=10m count avg(duration) as avg by timedelay
| sort by _time timedelay desc

I would like to have it sorted by group (event count) and avg duration. I mean: the first column is time (OK now), the second will be "count: 1 sec", the third "avg: 1 sec", the fourth "count: 2-5 sec", the fifth "avg: 2-5 sec", etc. Currently the column order does not match this expectation. Thank you
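One possible workaround, untested, is to list the columns explicitly with table after the timechart; the exact column names ("count: 1 sec" etc.) are assumptions about what this timechart produces and should be checked against the actual results:

```
| timechart span=10m count avg(duration) as avg by timedelay
| table _time, "count: 1 sec", "avg: 1 sec",
        "count: 2-5 sec", "avg: 2-5 sec",
        "count: 6-10 sec", "avg: 6-10 sec",
        "count: 11 and more sec", "avg: 11 and more sec"
```

table preserves the order in which the fields are listed, which gives the interleaved count/avg layout described above.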
Hello, I have 5 servers. Every server has a current count of user sessions. I want to sum them up without losing the trend function in the single value chart. This is my current query:

index=ascrm sourcetype=jmx NumUiSessions=* host IN (z1il0095*,z1il0096*,z1il0097*,z1il0098*)
| stats latest(NumUiSessions) as latest_NumUiSessions by host
| stats sum(latest_NumUiSessions) AS UISessions

Could anyone give me a clue? Best regards, Benjamin
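The single value visualization's trend indicator needs a time series, which a plain stats pipeline discards; a hedged, untested sketch that keeps _time by summing per timechart span (the 5-minute span is an assumption to adjust as needed):

```
index=ascrm sourcetype=jmx NumUiSessions=* host IN (z1il0095*,z1il0096*,z1il0097*,z1il0098*)
| timechart span=5m latest(NumUiSessions) by host
| addtotals fieldname=UISessions
| fields _time, UISessions
```

timechart produces one column per host, addtotals sums them into UISessions per row, and the resulting series lets the single value panel show both the latest total and its trend.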