All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Is it possible to only allow REST API access with token authentication and not username:password? Is there a config to allow certain roles to be able to access the REST API?
I am importing sign-in logs from Azure and I want to build a query that takes its input from a CSV file (appid), searches the logs, and displays the number of successful and failed sign-ins per app.
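A minimal sketch of one possible approach, assuming the lookup file is named appid.csv, its column appId matches the field name in the sign-in events, the index is azure_signin, and a result field carries "Success"/"Failure" values (all of these names are assumptions to adapt):

index=azure_signin [| inputlookup appid.csv | fields appId ]
| stats count(eval(result="Success")) AS success count(eval(result="Failure")) AS failure BY appId

The subsearch turns each row of the CSV into an appId=... filter, so only the listed apps are counted.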
I have created a job that logs in to a specific web page. It runs perfectly on my local machine, but when I set this job to run as an AppDynamics synthetic job I get an error. Sometimes I receive a timeout error and sometimes I get an "Unable to locate element" error. I have already tried different timeout periods and many types of element selectors (XPath, CSS selector, ID ...); locally every selector works fine, but the problem happens every time I try to run it on AppDynamics. What else should I try?
I am trying to get the total count of a field called ID for the earliest and latest hours of a particular time range. Assume I am looking at a time range of 8 AM to 5 PM. I want the count of the field called "ID" for 8 AM to 9 AM and also the count for 4 PM to 5 PM, and to show what is different if the values of ID differ between the 8 AM to 9 AM and 4 PM to 5 PM hours. The following is the query I am using:

index=test
| rename "results{}.id" as "id"
| bin _time span=1h
| stats count(id) as total by _time
| delta total as difference
| fillnull value=0
| eval status=case(difference=0, "No change", difference<0, "Device(s) Removed", difference>0, "Device(s) Added")
| search status!="No change"
| rename _time as time
| eval time=strftime(time,"%m/%d/%y %H:%M:%S")
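One hedged sketch that compares the two one-hour windows directly instead of relying on delta, assuming the time picker covers a single day from 8 AM to 5 PM:

index=test
| rename "results{}.id" AS id
| eval hour=tonumber(strftime(_time, "%H"))
| eval window=case(hour=8, "8AM-9AM", hour=16, "4PM-5PM")
| where isnotnull(window)
| stats dc(id) AS id_count values(id) AS ids BY window

Comparing the ids multivalue fields of the two rows then shows which IDs were added or removed between the first and last hour.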
Hello everyone. I'm fairly new to Splunk; I've recently joined a job as a security analyst in a SOC where I get to use this cool tool. This question is kind of a continuation of my previous post: https://community.splunk.com/t5/Splunk-Search/Help-on-query-to-filter-incoming-traffic-to-a-firewall/m-p/599607/highlight/true#M208701

I had to build a query to do two things. First, look for any policy that potentially has any ports enabled. Second, find out which of these policies were allowing or tearing down requests coming from public IP addresses. For this I came up with this query, which does the work imo:

index="sourcedb" sourcetype=fgt_traffic host="external_firewall_ip" action!=blocked
| eventstats dc(dstport) as different_ports by policyid
| where different_ports>=5
| eval source_ip=if(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src),"private","public")
| where source_ip="public"
| eval policy=if(isnull(policyname),policyid,policyid+" - "+policyname)
| eval port_list=if(proto=6,"tcp",if(proto=17,"udp","proto"+proto))+"/"+dstport
| dedup port_list
| table source policy different_ports port_list
| mvcombine delim=", " port_list

However, the problem I'm having is that the port list is shown as if it were one big list, like this:
1
2
3
4
5
I'd like it to show like this: 1, 2, 3, 4, 5
I've also tried replacing the table command with a stats delim=", " value(port_list), but I've had no success. I'd appreciate it if you could give me some insight on how I could solve this; I had mvjoin in mind but had no clue how to approach it. Thanks in advance.
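A rough sketch of one way to replace the dedup/mvcombine tail so the ports render as a single comma-separated string, keeping the earlier evals unchanged:

...
| stats values(port_list) AS port_list max(different_ports) AS different_ports BY policy
| eval port_list=mvjoin(port_list, ", ")

stats values() de-duplicates the ports per policy, and mvjoin collapses the multivalue result into one delimited string.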
Present scenario: we have a "high memory" alert that detects systems whose memory hits the set threshold (Committed Memory usage over 115%), running on a schedule every hour at 15 minutes past the hour.

Requirement: this alert gives us a kind of live result, but we need historic data in the form of a report so that we can review at the end of the month and see that host "Alpha" was caught by this alert x times in the month, host "beta" was caught y times, and so on. Basically, for a particular host, we need to find how frequently and how many times it had high memory.

| mstats avg(_value) AS CommittedMemoryInBytes WHERE index=xyz AND metric_name=Memory.Committed_Bytes by host
| join host [search index=abc sourcetype=WHM source=operatingsystem TotalPhysicalMemoryKB=*]
| eval PercentCommittedMemory = round( (CommittedMemoryInBytes*pow(2,-30)) / (TotalPhysicalMemoryKB*pow(2,-20) )*100,2)
| where PercentCommittedMemory > 115
| table host,PercentCommittedMemory,CommittedMemoryInBytes,TotalPhysicalMemoryKB

Any help will be appreciated.
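A sketch of a monthly report built on the same threshold logic: run it over a 30-day time range, evaluate the metric hourly, and count breaches per host (the hourly span and the latest() aggregation in the join are assumptions):

| mstats avg(_value) AS CommittedMemoryInBytes WHERE index=xyz AND metric_name=Memory.Committed_Bytes span=1h BY host
| join host [search index=abc sourcetype=WHM source=operatingsystem TotalPhysicalMemoryKB=* | stats latest(TotalPhysicalMemoryKB) AS TotalPhysicalMemoryKB BY host]
| eval PercentCommittedMemory=round((CommittedMemoryInBytes*pow(2,-30)) / (TotalPhysicalMemoryKB*pow(2,-20))*100, 2)
| where PercentCommittedMemory > 115
| stats count AS breach_count BY host
| sort - breach_count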
We had instances where a dashboard was not updating. I would like to create an alert if a dashboard or its panels are not updating. How do I achieve this? Please help.
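If the panels are driven by scheduled saved searches, one hedged sketch is to alert when a populating search has not run recently (the savedsearch name and the two-hour threshold are placeholders):

index=_internal sourcetype=scheduler savedsearch_name="My Dashboard Base Search" earliest=-2h
| stats count AS runs
| where runs=0

Trigger the alert when the number of results is greater than zero.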
This app is not supported for Splunk Cloud because of the jQuery versions. Does anyone know if this will be updated, or if this app is no longer supported by the contributors from Splunk Works?
I recently added a new SH to our SHC. Show shcluster-status is good, show kvstore-status is good. I created some kvstore entries on SH1 and they replicated to SH5 (the new one). I also see the results of scheduled searches that are writing to the kvstore(s) being replicated to the new SH. I see no errors in splunkd.log. However, whenever the DMC tries to access /services/server/introspection/kvstore/collectionstats for the "KV Store: Instance" panel, it errors with:

"The Rest request on the endpoint URI /services/server/introspection/kvstore/collectionstats?count=0 returned HTTP 'status not OK': code=500 internal server error'"

I can hit the other /kvstore APIs just fine, .../kvstore/replicasetstats and .../kvstore/serverstats. If I run "| rest splunk_server=local /services/server/introspection/kvstore/collectionstats" on the new search head in a search bar I get the same error. This is version 8.2.0 and the kvstore is using the wiredTiger storage engine. Any suggestions? TIA
Dear all, I have the table below: FieldA (string), FieldB to FieldE (numeric). Can we use the FieldA value to change the color of FieldB to FieldE? The logic is below:
1. Color the FieldB cell red when (FieldA = "1111" or "3333" or "4444") and the FieldB cell value > 0
2. Color the FieldC cell red when (FieldA = "2222" or "3333") and the FieldC cell value > 0
3. Color the FieldD cell red when (FieldA = "2222" or "3333") and the FieldD cell value > 0
4. Color the FieldE cell red when (FieldA = "4444") and the FieldE cell value > 0
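Simple XML color formatting generally only looks at a cell's own value, so cross-field rules like these are usually handled by precomputing helper flag fields in SPL and then coloring (or hiding) them with a JS table cell renderer. A hedged sketch of the flag evals, assuming the field names above:

| eval FieldB_flag=if(in(FieldA, "1111", "3333", "4444") AND FieldB>0, "red", "none")
| eval FieldC_flag=if(in(FieldA, "2222", "3333") AND FieldC>0, "red", "none")
| eval FieldD_flag=if(in(FieldA, "2222", "3333") AND FieldD>0, "red", "none")
| eval FieldE_flag=if(FieldA="4444" AND FieldE>0, "red", "none")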
I execute a search with this ...

index=foo sourcetype=wineventlog field=value ...

In the search.log I am seeing a line that says ...

INFO SearchEvaluatorBasedExpander - sourcetype expansions took 32 ms

and after that I see ...

INFO UnifiedSearch - Expanded index search = (index=foo sourcetype=wineventlog OR sourcetype=WinEventLog:Application OR sourcetype=WinEventLog:DFS-Replication OR sourcetype=WinEventLog:DNS-Server OR sourcetype=WinEventLog:Directory-Service OR sourcetype=WinEventLog:File-Replication-Service OR sourcetype=WinEventLog:Key-Management-Service ...

Is there a way to prevent this sourcetype expansion? The search still works, but it encompasses more data than needs to be searched over and is inefficient.
Hello, I am facing an issue while trying to read aggregated info from the Splunk REST API. A query that uses the calculation below is able to provide 4 columns via the UI, but not via the ADF REST API, where I get only the Total result. It seems to me like the issue is with the grouped data, which cannot be read for some reason. Any suggestions, please?

| eval Days=(relative_time(now(), "@month+28d")-patchLevelDate)/86400
| where time>relative_time(now(), "-30d")
| eval system="2. VDI Persistent"
| eval compliant=if(Days<70, "Yes", "No")]
| chart count(host) by system compliant
| addtotals
How to compare difference in the json file. If there is no difference we are good. But in my case i need to find compare N_aaa and A_aaa and find out the difference  N_aaa A_aaa { "AAA": { "m... See more...
How can I compare differences in the JSON file? If there is no difference, we are good, but in my case I need to compare N_aaa and A_aaa and find out the difference.

N_aaa A_aaa
{
  "AAA": {
    "modified_files": [
      "a/D:\\\\splunk\\\\Repos\\\\Wed\\\\N_aaa/aaa/pack-672b2efd6aada12ecfc8d1745f805706f43902f4.idx",
      "a/D:\\\\splunk\\\\Repos\\\\Wed\\\\N_aaa/aaa/pack-672b2efd6aada12ecfc8d1745f805706f43902f4.pack",
      "a/D:\\\\splunk\\\\Repos\\\\Wed\\\\A_aaa/aaa/objects/pack/pack-8a069e643d668a0715f82a237b44f1554535719f.idx",
      "a/D:\\\\splunk\\\\Repos\\\\Wed\\\\A_aaa/aaa/objects/pack/pack-8a069e643d668a0715f82a237b44f1554535719f.pack"
    ]
  }
}
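A hedged sketch that expands the modified_files array, tags each path by repository, and keeps pack hashes that do not appear under both N_aaa and A_aaa (the spath path and the rex pattern are assumptions based on the sample above):

| spath path=AAA.modified_files{} output=modified_files
| mvexpand modified_files
| eval repo=case(like(modified_files, "%N_aaa%"), "N_aaa", like(modified_files, "%A_aaa%"), "A_aaa")
| rex field=modified_files "pack-(?<pack_hash>[0-9a-f]+)\."
| stats values(repo) AS repos dc(repo) AS repo_count BY pack_hash
| where repo_count < 2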
I have a table like this:           I would like to propagate "start" value and "end" value if "_time>=start AND _time<end". It's like a "transaction" with "startwith and endwith", but I n... See more...
I have a table like this:

[table screenshot]

I would like to propagate the "start" value and the "end" value when "_time>=start AND _time<end". It's like a "transaction" with "startswith and endswith", but I need to use "streamstats" because I can't lose the event details. So I would like to obtain:

[table screenshot]

Thanks
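A sketch that keeps every event and carries the most recent start/end values forward, assuming events are sorted by time and that start/end hold epoch timestamps:

| sort 0 _time
| streamstats last(start) AS start_filled last(end) AS end_filled
| eval in_range=if(_time>=start_filled AND _time<end_filled, 1, 0)

filldown start end achieves the same carry-forward in a single command if overwriting the original fields is acceptable.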
Hi, I have a table like the one below; each field is a parameter of a search query, and I now want to know which of them are used most.

SPL:
| table a b c d e f

FYI: some of these fields are empty, and some of them are partially similar to each other. I need to find the most-used pattern in this table. Any idea? Thanks
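A sketch that counts how often each full combination of parameters occurs, treating empty cells as a placeholder value:

| fillnull value="(empty)" a b c d e f
| stats count AS occurrences BY a b c d e f
| sort - occurrences

The top row is the most-used pattern; swap the stats for a per-field count if the goal is instead to see which single parameter is populated most often.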
I am in Splunk Enterprise trying to create a Dashboard in the source code. When I input the below code it says on the UI "Unable to create search" in regards to the User: All section Is this a us... See more...
I am in Splunk Enterprise trying to create a dashboard in the source code. When I input the code below, the UI says "Unable to create search" in regard to the User: All section. Is this a user role restriction preventing me from searching all users, or something else? It does not show any errors on the edit source page. Code below:

<form theme="dark">
  <label>Splunk Search Activity</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="time1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="radio" token="exclude1" searchWhenChanged="true">
      <label>Splunk System User</label>
      <choice value="user!=splunk-system-user">exclude</choice>
      <choice value="*">include</choice>
      <default>user!=splunk-system-user</default>
      <initialValue>user!=splunk-system-user</initialValue>
    </input>
    <input type="multiselect" token="user1">
      <label>User:</label>
      <fieldForLabel>user1</fieldForLabel>
      <fieldForValue>user</fieldForValue>
      <search>
        <query>index=_audit action=search search!="'typeahead*" $exclude1$ | stats count by user</query>
        <earliest>$time1.earliest$</earliest>
        <latest>$time1.latest$</latest>
      </search>
      <choice value="*">all</choice>
      <default>*</default>
      <initialValue>*</initialValue>
      <delimiter> </delimiter>
    </input>
    <input type="text" token="filter1">
      <label>Search Filter:</label>
      <default>*</default>
      <initialValue>*</initialValue>
      <prefix>"*</prefix>
      <suffix>*"</suffix>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=_audit action=search search!="'typeahead*" user="$user1$" search=$filter1$ $exclude1$ | stats count by _time user search total_run_time search_id app event_count | sort -_time</query>
          <earliest>$time1.earliest$</earliest>
          <latest>$time1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hi all, I have a multi-value field as shown below:

_time                   field_test
2022-05-13 04:36:00     test_data_1, test_data_2, test_data_3, test_data_4
2022-05-13 03:30:00     test_data_9, test_data_10, test_data_3, test_data_4

For the above two events, I am trying to write a query that gives me the common values, so that the result is:

test_data_3
test_data_4

Please help me with how I can accomplish this.
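A sketch that expands the multivalue field and keeps only values seen in more than one event (index=your_index is a placeholder, and it assumes the two events have distinct _time values):

index=your_index
| mvexpand field_test
| stats dc(_time) AS events_containing BY field_test
| where events_containing > 1
| fields field_test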
One problem that I have with alerting from Splunk is that when I alert by email, the total width of the table can exceed what the recipient can handle looking at. I'd like to start transposing my result table to address this. That is, I'd like to go from sending alerted results like this:

time        field1    field2    field3
5/31/2022   value1    value2    really long value 3, so long that it creates a formatting problem. Oh noes! What will I do?

To something more like this:

Time: 5/31/2022
field1: value1
field2: value2
field3: really long value 3, so long that it creates a formatting problem. Oh noes! What will I do?

I know that I could create a field called "alert fields" and manually build the fields, but is there a simple way to do this in Splunk?
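A hedged sketch using transpose, assuming the alert search already renames _time to a time field as in the example:

<your alert search>
| transpose 0 column_name="field" header_field="time"

Each original row becomes a column headed by its time value, with one line per original field, which keeps the emailed table narrow.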
Hi, we have a tier related to a Java process. The point is that this tier corresponds to a batch process. However, it is recognized as a normal process, and it is not unregistered after a while even though it finished its work long before. So, is there any way to control the time after which a node is unregistered automatically if it has had no traffic for a while? Thanks, Carlos
Hello, I'm facing a problem with role restrictions in searches. I applied the restriction in the role and everything was working perfectly, even with searches in the data model. However, when I accelerated my data model, the role restriction filters stopped working. I imagine this is due to the tsidx files generated by acceleration. How can I apply such a restriction even in accelerated data models? Thanks!