All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Edit: I am using the correct token; it's my instance that's wrong. This instance is on my local machine for easy development. My other two servers work fine. I moved my app over to my test server, and it works fine.

I am building my own REST handler which accesses storage_passwords. I assume I'm using the wrong token? I don't know the structure of "self" here.

import splunk, base64, sys, os, time, json, re, shutil, subprocess, platform, logging, logging.handlers
from splunk.persistconn.application import PersistentServerConnectionApplication
import splunklib.client as client

logger = logging.getLogger(__name__)  # logger was used below but never defined

class req(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        authtoken = json.loads(in_string)["session"]["authtoken"]
        logger.debug(authtoken)
        service = client.connect(token=authtoken)
        storage_passwords = service.storage_passwords

Hardcoding the credentials, I just get "login failed". I'm thinking this is related to my other post https://community.splunk.com/t5/Dashboards-Visualizations/Credentials-not-accepted-for-port-8089/m-p/530014#M35981. Browsing 8089, I also cannot log on with any credential.
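As a side note on the handler's payload handling: the token extraction can be exercised on its own, independent of a live Splunk instance. Below is a minimal sketch, assuming the payload layout `{"session": {"authtoken": ...}}` shown in the handler code above (the sample values are hypothetical, not real tokens):

```python
import json

def extract_authtoken(in_string: str) -> str:
    """Pull the session auth token out of a persistent REST handler payload.

    Assumes the {"session": {"authtoken": ...}} shape used in the post;
    this is not a complete schema of what Splunk sends.
    """
    payload = json.loads(in_string)
    return payload["session"]["authtoken"]

# Hypothetical payload shaped like the one the handler receives:
sample = json.dumps({"session": {"authtoken": "abc123", "user": "admin"}})
print(extract_authtoken(sample))  # -> abc123
```

Testing this part in isolation helps separate "wrong token structure" from "instance rejects all logins", which is what the 8089 symptom points at.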
I have a syslog server with a HF installed. When logs are sent from the HF to the indexer, the host field is set based on the event host. Can we extract a new field for the HF hostname?
Hello, I am having some trouble parsing this JSON file to pull out the nested contents of 'licenses'. My current search can grab the contents of the inner JSON within 'features' but not the nested 'licenses' portion. My current search looks like this:

index=someindex
| fields features.*.*
| rename features.* as *
| eval FieldList=""
| foreach * [ eval FieldList=if("<<MATCHSTR>>"!="FieldList",FieldList.","."<<MATCHSTR>>","") ]
| eval FieldList=split(FieldList,",")
| mvexpand FieldList
| eval Software=mvindex(split(FieldList,"."),0), Column=mvindex(split(FieldList,"."),1)
| eval value=""
| foreach * [ eval value=if("<<FIELD>>"==Software.".".Column,'<<FIELD>>',value), {Column}=value ]

Sample JSON file:

"features": {
  "M_TOOL": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "M_GUI": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "MT_GUI": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "M_TOOL": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "ML_GUI": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "C_SOLVTOOL_Ser": { "licenses": [], "num_issued": 40, "num_used": 0, "num_available": 40, "parse_status": "SUCCESS", "parse_error": null },
  "CP_SOLVTOOL_Par": { "licenses": [], "num_issued": 600, "num_used": 0, "num_available": 600, "parse_status": "SUCCESS", "parse_error": null },
  "CD_SOLVTOOL_Ext": { "licenses": [], "num_issued": 20000, "num_used": 0, "num_available": 20000, "parse_status": "SUCCESS", "parse_error": null },
  "C_SOLV_Ser": {
    "licenses": [
      { "version": , "vendor_daemon": "mcomp", "expiration_date": "2021-08-31", "type": "floating", "parse_status": "SUCCESS", "parse_error": null }
    ],
    "num_issued": 40, "num_used": 16, "num_available": 24, "parse_status": "SUCCESS", "parse_error": null
  }
}

Ideally I'd like to put the contents into a table like this:

vendor_daemon   expiration_date   type       parse_status   parse_error
mcomp           2021-08-31        floating   SUCCESS        null

Thank you so much! Appreciate any and all help!
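Outside of SPL, the flattening being asked for is straightforward to express: iterate over the features map and emit one row per entry in each nested 'licenses' list. Here is a minimal Python sketch of that idea, using a trimmed-down sample shaped like the JSON in the post (not the full file):

```python
def license_rows(features: dict) -> list:
    """Flatten each feature's nested 'licenses' list into one flat row per license."""
    rows = []
    for feature, info in features.items():
        for lic in info.get("licenses", []):
            rows.append({
                "feature": feature,
                "vendor_daemon": lic.get("vendor_daemon"),
                "expiration_date": lic.get("expiration_date"),
                "type": lic.get("type"),
                "parse_status": lic.get("parse_status"),
                "parse_error": lic.get("parse_error"),
            })
    return rows

# Trimmed sample mirroring the post's structure:
sample = {
    "M_TOOL": {"licenses": [], "num_issued": 40},
    "C_SOLV_Ser": {
        "licenses": [{
            "vendor_daemon": "mcomp",
            "expiration_date": "2021-08-31",
            "type": "floating",
            "parse_status": "SUCCESS",
            "parse_error": None,
        }],
        "num_issued": 40,
    },
}
for row in license_rows(sample):
    print(row)
```

The same shape of answer in SPL usually comes from spath-style extraction of the `features.<name>.licenses{}` arrays before tabling, since empty 'licenses' lists simply contribute no rows.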
Is anyone using Splunk for their fraud monitoring efforts, whether it be for transactional monitoring of activity, alerting, or anything else?
Hello all, I am a newer Splunk user and I am trying to sort the following rows:

Level: Low, Moderate, High, Null, Total

But I would like for it to look like this: High, Moderate, Low, Null, Total
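The desired ordering is a custom severity rank rather than an alphabetical sort. The usual trick is to map each level to a number and sort by that number; here is a minimal Python sketch of the idea (in SPL the analogous move is an eval/case that assigns a rank column before sorting):

```python
# Explicit rank for each level; unknown levels sort last.
order = {"High": 0, "Moderate": 1, "Low": 2, "Null": 3, "Total": 4}

rows = ["Low", "Moderate", "High", "Null", "Total"]
ranked = sorted(rows, key=lambda level: order.get(level, len(order)))
print(ranked)  # -> ['High', 'Moderate', 'Low', 'Null', 'Total']
```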
When I first set up Splunk on my local machine (playing around with it as I learn it), I could search for '*' and be shown all events for the time period. Then suddenly, I got no data back. I thought my events were lost. After lots of container deletes and creations (running in Docker), I eventually realised that my data was there, but ONLY when I searched for it exactly. What is going on? This is an image of four searches, but only the specific ones return data. Why has '*' stopped working? https://imgur.com/a/CYBJcHJ
This does not work. We have multiple agents that say "not assigned" and will not delete, and since we don't have a server license I cannot associate them with anything in order to delete them. Note: This post was split off from an existing post, Need to remove Appdynamics Agents permanently in On premises. Edited by @Ryan.Paredez: I added the link to the old post and made a minor title change.
We have some users asking for notable events and emails depending on search results. Example: if the number of errors returned in the last 5 minutes is < 5, send an email; if > 5, allow a notable event to be generated. I don't want to create two searches for this (an alert and a correlation search). Is it possible to write one search to accomplish this?
Hi, I'm using the Gantt chart visualization from the Gantt chart app to show the status of batch processes. Currently, clicking the Gantt chart takes me to the search query; instead I want it to open another Gantt chart. The problem is I'm having a hard time getting the Gantt chart to drill down into another Gantt chart when I click a bar (as shown in the image below). What I'm trying to achieve is to have the user taken to another Gantt chart that shows the history of the process (say the past 7 days) for the bar they clicked in the original Gantt chart. Any lead or reference would help. Below is the main Gantt chart XML; when I click a bar it should drill down to another Gantt chart, passing the process as a token, so the new Gantt chart can show the history of that process for the past 7 days.

<form script="gantt:autodiscover.js">
<label>Process Monitoring</label>
<fieldset submitButton="false">
<input type="dropdown" token="tko_posting_date" searchWhenChanged="true">
<label>Posting Date</label>
<search>
<query>| gentimes start=-31 end=1 | sort -starttime | eval date=strftime(starttime,"%d-%m-%Y %a") | eval pdate=strftime(starttime,"%Y%m%d") | eval today=strftime(now(), "%d-%m-%Y %a") | eval yesterday=strftime(relative_time(now(),"-1d@d"), "%d-%m-%Y %a") | eval adate=case(date = today, "Today", date = yesterday, "Yesterday", 1=1 , date) | eval sortd=case(date = today, "1", date = yesterday, "2", 1=1 , "0") | eval endtime=endtime+43200 | table pdate adate sortd starttime endtime | sort sortd, pdate desc</query>
</search>
<fieldForLabel>adate</fieldForLabel>
<fieldForValue>pdate</fieldForValue>
<selectFirstChoice>true</selectFirstChoice>
<change>
<set token="tko_set_earliest">$row.starttime$</set>
<set token="tko_set_latest">$row.endtime$</set>
</change>
</input>
<input type="dropdown" token="tko_inst" searchWhenChanged="true">
<label>Select Institution</label>
<fieldForLabel>institution_name</fieldForLabel>
<fieldForValue>institution_number</fieldForValue> <search> <query>| inputlookup institution_name.csv | table institution_name institution_number</query> <earliest>-15m</earliest> <latest>now</latest> </search> <default></default> <initialValue>*</initialValue> <choice value="*">ALL</choice> </input> <input type="dropdown" token="tko_emins" searchWhenChanged="true"> <label>Elapsed Mins</label> <choice value="1">Non-Zero Values</choice> <choice value="0">All Values</choice> <default>1</default> </input> </fieldset> <row> <panel> <html> <h2>Gantt Chart demo</h2> <div id="demo-search" class="splunk-manager" data-require="splunkjs/mvc/searchmanager" data-options="{ &quot;search&quot;: { &quot;type&quot;: &quot;token_safe&quot;, &quot;value&quot;: &quot;| inputlookup jobs.csv | search institution_number=$$tko_inst$$ posting_date=$$tko_posting_date$$ duration&gt;=$$tko_emins$$ &quot; }, &quot;earliest_time&quot;: { &quot;type&quot;: &quot;token_safe&quot;, &quot;value&quot;: &quot;$$tko_set_earliest$$&quot; }, &quot;latest_time&quot;: { &quot;type&quot;: &quot;token_safe&quot;, &quot;value&quot;: &quot;$$tko_set_latest$$&quot; }, &quot;cancelOnUnload&quot;: true, &quot;preview&quot;: true }"> </div> <div id="demo-view" class="splunk-view" data-require="app/gantt/components/gantt/gantt" data-options="{ &quot;managerid&quot;: &quot;demo-search&quot;, &quot;startField&quot;: &quot;startField&quot;, &quot;durationField&quot;: &quot;duration&quot;, &quot;categoryLabel&quot;: &quot;Process&quot;, &quot;categoryField&quot;: &quot;process&quot; }"> </div> </html> </panel> </row> </form>   Any thoughts?
Hi All, can someone please assist me? Presently I have a Heavy Forwarder on AIX 6.1 sending data to indexers on Linux running version 7.3.0. Now I am upgrading the Splunk instances on both OSes. My concern is whether Linux indexers on version 8.0.4 will still support a Heavy Forwarder on AIX. Presently I have no issue, but I am worried about what happens once I upgrade to 8.0.4.

PS: Splunk Enterprise doesn't support AIX anymore.
Hello SMEs, seeking support to eval a new field from two already-extracted ones. I have bytes_received & bytes_sent fields, and I want one more field (total_bytes) which is the sum of both:

eval total_bytes = bytes_received + bytes_sent

Please suggest.
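One subtlety worth noting about this kind of sum: if either operand is missing on an event, a plain addition yields a null result rather than the other operand's value. A minimal Python sketch of the null-safe version of the idea (field names taken from the post; the 0-default mirrors the common SPL pattern of coalescing missing values to 0 before adding):

```python
def total_bytes(bytes_received, bytes_sent):
    # Treat a missing (None) value as 0 so a sum is still produced
    # instead of the whole result becoming null.
    return (bytes_received or 0) + (bytes_sent or 0)

print(total_bytes(1024, 512))   # -> 1536
print(total_bytes(1024, None))  # -> 1024
```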
Hi! I’m noticing very different SPL, and thus different performance, between the NMON Summary Light Analysis dashboard (the Top 20 processes CPU Statistics panel specifically) and the NMON Analyser LINUX dashboard (Process, Kernel, I/O Statistics, Top, CPU Usage per logical core). When I compare the two dashboards, the results look identical to me. But the performance isn’t the same: the one from the Light Analysis is about 4 times slower. I am wondering why, and whether it’s normal. Check it out:

From NMON Summary Light Analysis, SPL:

| mstats max(_value) as value where `nmon_metrics_index` metric_name="os.unix.nmon.processes.top.pct_CPU" host=myhost by host, metric_name, dimension_Command, dimension_PID span=1m
| stats sum(value) as pct_CPU by _time, host, metric_name, dimension_Command
| appendcols [ | mstats latest(_value) as logical_cpus where `nmon_metrics_index` metric_name="os.unix.nmon.cpu.cpu_all.logical_cpus" host=myhost by host ]
| appendcols [ | mstats latest(_value) as virtual_cpus where `nmon_metrics_index` metric_name="os.unix.nmon.cpu.cpu_all.virtual_cpus" host=myhost by host ]
| filldown logical_cpus, virtual_cpus
| stats values(pct_CPU) as pct_CPU, values(logical_cpus) as logical_cpus, values(virtual_cpus) as virtual_cpus by _time, host, dimension_Command
| eval usage_per_core=(pct_CPU/100), smt_threads=(logical_cpus/virtual_cpus)
| eval usage_per_core=case(isnum(smt_threads) AND smt_threads>="2", usage_per_core*1.2, isnum(smt_threads) AND smt_threads>="4", usage_per_core*1.4, isnum(usage_per_core), usage_per_core)
| timechart `nmon_span` useother=f limit="20" max(usage_per_core) as "CPU Usage per core" by dimension_Command

Runtime: This search has completed and has returned 364 results by scanning 533,512 events in 3.937 seconds.

And from NMON Analyser Linux Dashboard, SPL:

| mstats max(_value) as value where `nmon_metrics_index` metric_name="os.unix.nmon.processes.top.pct_CPU" host="myhost" by dimension_Command dimension_PID span=1m
| stats sum(value) as pct_CPU by _time, dimension_Command
| eval usage_per_core=(pct_CPU/100)
| timechart `nmon_span` useother=f limit="50" max(usage_per_core) as "CPU Usage per core" by dimension_Command

Runtime: This search has completed and has returned 364 results by scanning 533,512 events in 1.36 seconds.

Again, identical results, but very different performance and different SPL, which is most likely the cause of the different performance. Thoughts? Thanks! @guilmxm
Hi all. I have Symantec Endpoint Protection Manager and am troubleshooting the Splunk Malware data model. I am trying to determine what exactly constitutes an event as malware. I've already gone through this link about the CIM for malware, but it doesn't answer my question. Basically, I have a minor risk event from SEP, but that event did not trigger in a correlation search which searches the "Malware" data model. I'll attach a screenshot of the data model. I'm assuming my event didn't match because it was not tagged as malware, as per the constraint of the dataset. My question is: where can I find the criteria of this tag? Hope that makes sense.
I'm trying to create a query where I get results for a specific user triggering two of the same alerts. Is there a way to set 'stats count by' to equal 2, so that the results show only users that have triggered this alert twice? Or is there a specific command that will allow me to do this?

index=`email` action=blocked | stats count by user_ID
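The shape of the answer is count-then-filter: aggregate occurrences per user, then keep only the users whose count equals 2 (in SPL this is typically a where/search clause appended after the stats). A minimal Python sketch of the same filter-on-count idea, with made-up user IDs:

```python
from collections import Counter

# Hypothetical stream of user_ID values from blocked-email events:
events = ["alice", "bob", "alice", "carol", "bob", "bob"]

counts = Counter(events)                                  # count by user_ID
twice = [user for user, n in counts.items() if n == 2]    # keep count == 2
print(twice)  # -> ['alice']
```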
I need to install this app, but I don't know if I should install it on the indexers, on the search heads, or on both.
Hello, this is my architecture:
- dedicated indexers (multiple servers on the main site)
- dedicated search head (1 server on the main site)
- dedicated management server (1 server on the main site)
- dedicated syslog/forwarders (1 server per site)

I have an issue with my search head. When I check the DMC, I can see there are occasional disk usage peaks that immediately go back down. For example, the last peak was today: it started at 10:15 and went back down at 13:45. I don't understand these peaks or where the data came from. I checked the logs in Splunk but have no clues, and I don't know whether I should investigate in Splunk or in Linux. Hope you can help me, Splunkers. Regards
If someone requests changing the Splunk installation folder:
1. Is it possible to just move it and have everything work normally?
2. What should I keep in mind, and exactly what should I do, step by step?

The truth is this worries me a lot. As I see it, it is as if they asked me to move a program in Windows from the C: to the D: partition and hope that it continues to work normally, and obviously that will not happen.
Hi, we are currently considering deploying a small Splunk Enterprise platform on AWS. Details:
- 10 GB/day of ingestion
- fewer than 10 users
- the data is not "time-framed", as we collect data and send it from time to time
- the platform will mostly be used for analysts' searches & queries

I've read about SmartStore, which is supposed to be cheaper, though slower. I also understand that it caches data/buckets mostly by time. How much slower would it be compared to EBS storage, and would you suggest using it for warm data? Much appreciated.
Hi guys, looking for a bit of help because I've confused myself at this point and can't think logically. I'm creating a search where I can show uploads vs downloads, and the criteria are as follows:
- if bytes_in (download) is more than 70% of all the bytes for that user, then the main action is data download, and I want to add a column "alert" with the value "download";
- if bytes_out (upload) is more than 70% of all the bytes for that user, then the main action is data upload, and the "alert" column should say "upload";
- if bytes in and bytes out are within 20% of an even split, then the value in the "alert" column will be "no action".

Now, how do I do it in Splunk?
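The classification rules above boil down to comparing each direction's share of the total bytes against a 70% threshold (in SPL this is typically a single eval with case()). A minimal Python sketch of the logic, with the function and label names chosen for illustration, not taken from any Splunk app:

```python
def classify(bytes_in: int, bytes_out: int) -> str:
    """Label traffic per the thresholds in the post:
    "download" if bytes_in > 70% of the total,
    "upload" if bytes_out > 70% of the total,
    "no action" otherwise (including an all-zero total)."""
    total = bytes_in + bytes_out
    if total == 0:
        return "no action"
    if bytes_in / total > 0.7:
        return "download"
    if bytes_out / total > 0.7:
        return "upload"
    return "no action"

print(classify(900, 100))  # -> download
print(classify(60, 40))    # -> no action
```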
Hi All, we have two roles set up in Splunk and assigned to a single user via AD groups, as shown below. We have applied a srchFilter to role_abc. The user complains that he is unable to see any logs for the indexes mapped under role_xyz. I suspect that the srchFilter under role_abc is causing this problem. How do we resolve this so the user has access to all the indexes mapped according to their roles? Thank you.

[role_abc]
accelerate_search = enabled
cumulativeRTSrchJobsQuota = 50
edit_search_schedule_window = enabled
export_results_is_visible = enabled
get_metadata = enabled
get_typeahead = enabled
pattern_detect = enabled
rest_properties_get = enabled
rtSrchJobsQuota = 20
rtsearch = enabled
schedule_search = enabled
search = enabled
srchDiskQuota = 200
srchFilter = index::rckspc OR (source::marketing-production OR source::http:marketing-staging)
srchIndexesAllowed = hrk;rckspc
srchIndexesDefault = hrk;rckspc

[role_xyz]
accelerate_search = enabled
cumulativeRTSrchJobsQuota = 50
edit_search_schedule_window = enabled
export_results_is_visible = enabled
get_metadata = enabled
get_typeahead = enabled
pattern_detect = enabled
rest_properties_get = enabled
rtSrchJobsQuota = 5
rtsearch = enabled
schedule_search = enabled
search = enabled
srchDiskQuota = 200
srchIndexesAllowed = os;windows;linux
srchIndexesDefault = os;windows;linux

@isoutamo @rbal_splunk @gcusello @martin_mueller @Stephen_Sorkin @MLGSPLUNK @maciep @nickhills @FrankVl