All Topics



Hi all, I am trying to generate a web certificate for the Splunk domains mentioned below. I could have done it for a single domain, but this is for multiple domains on the same host. Can someone guide me in the right direction?

splunk-test.zone.xx.com
splunk-test.rest.zone.xx.com

Regards,
Shivanand
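Multiple names on one host are usually handled with a single certificate carrying Subject Alternative Name (SAN) entries, rather than separate certificates. A minimal sketch of generating such a CSR with OpenSSL (the file names and choice of CN are placeholders; adjust for your CA's requirements):

```shell
# Write an OpenSSL config with both hostnames as SAN entries.
cat > san.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]
CN = splunk-test.zone.xx.com

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = splunk-test.zone.xx.com
DNS.2 = splunk-test.rest.zone.xx.com
EOF

# Generate a new key and a CSR that requests both names.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout splunkweb.key -out splunkweb.csr -config san.cnf
```

Submitting the resulting CSR to the CA should yield one certificate valid for both domains.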
Can my client's files be automatically indexed on the Splunk server? If yes, how?
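The usual pattern is a Universal Forwarder installed on the client machine with a monitor input pointing at the files; the forwarder then ships anything new to the indexer automatically. A sketch of the inputs.conf stanza (the path, index, and sourcetype are placeholders):

```
[monitor:///var/log/client_app/]
index = main
sourcetype = client_app_logs
disabled = 0
```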
Hi Team, We are working on a solution to monitor the utilization time of resources on users' machines. We have about 1000 machines where Splunk forwarders are installed. These forwarders are pushing data to a central Splunk instance. The issue we are now facing is that event codes sometimes go missing. For example, after an unlock (4801) there should be a lock (4800), but we are getting two consecutive unlock event codes (4801) without a lock in between. This is throwing our calculation of utilization time off. Below is the input stanza on the Splunk forwarders:

[WinEventLog://Security]
checkpointInterval = 5
current_only = 0
disabled = 0
index = test_events
start_from = oldest
whitelist = 4624,4634,4800,4801

We have been unable to figure out this issue for the past week. Can someone please help us out?

Many Thanks,
Naagaraj SV
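To see where the gaps actually occur, one approach (a sketch, assuming the default EventCode field extraction) is to compare each event with the previous one per host using streamstats, and surface every pair of back-to-back unlocks:

```
index=test_events EventCode IN (4800, 4801)
| sort 0 host _time
| streamstats current=f last(EventCode) as prev_code by host
| where EventCode=4801 AND prev_code=4801
```

The hosts and timestamps this returns narrow down whether the missing 4800s correlate with specific machines or time windows.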
This is the data file (<Interceptor>~~~</Interceptor><Interceptor>~~~</Interceptor>):

<?xml version="1.0" encoding="UTF-8" ?><dataroot><Interceptor><AttackCoords>-80.33100097073213,25.10742916222947</AttackCoords><Outcome>Interdiction</Outcome><Infiltrators>23</Infiltrators><Enforcer>Ironwood</Enforcer><ActionDate>2013-04-24</ActionDate><ActionTime>00:07:00</ActionTime><RecordNotes></RecordNotes><NumEscaped>0</NumEscaped><LaunchCoords>-80.23429525620114,24.08680387475695</LaunchCoords><AttackVessel>Rustic</AttackVessel></Interceptor>

I want this:

<Interceptor>~~~</Interceptor>
<Interceptor>~~~</Interceptor>
<Interceptor>~~~</Interceptor>

I used

LINE_BREAKER = <Interceptor>
MUST_BREAK_AFTER = \</Interceptor\>

but I can't make it work. What can I do?
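LINE_BREAKER must contain a capturing group: the captured text is discarded and the event boundary is placed there. To break before each <Interceptor> without discarding any of the XML, an empty capture group plus a lookahead is a common pattern. A sketch of the props.conf stanza (the sourcetype name is a placeholder):

```
[interceptor_xml]
SHOULD_LINEMERGE = false
LINE_BREAKER = ()(?=<Interceptor>)
```

With SHOULD_LINEMERGE disabled, each event then starts at an <Interceptor> tag and runs up to the next one.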
This is the data file (ip - - [time] text&&&ip - - [time] text&&&ip - - [time] text&&&):

41.146.8.66 - - [13/Jan/2016 21:03:09:200] "POST /category.screen?category_id=SURPRISE&JSESSIONID=SD1SL2FF5ADFF3 HTTP 1.1" 200 3496 "http://www.myflowershop.com/cart.do?action=view&itemId=EST-16&product_id=RP-SN-01" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4" 294&&&130.253.37.97 - - [13/Jan/2016 21:03:09:185] "GET /category.screen?category_id=BOUQUETS&JSESSIONID=SD7SL2FF1ADFF8 HTTP 1.1" 200 2320 "http://www.myflowershop.com/cart.do?action=changequantity&itemId=EST-12&product_id=AV-CB-01" "Opera/9.20 (Windows NT 6.0; U; en)" 361&&&141.146.8.66 - -

I want this:

ip - - [time] text
ip - - [time] text
ip - - [time] text

What can I do? (using LINE_BREAKER, etc.)
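Since the events are joined by the literal &&&, that delimiter itself can be the line breaker: text inside LINE_BREAKER's capture group is discarded, which removes the &&& while splitting the events. A sketch of the props.conf stanza (the sourcetype name is a placeholder, and the TIME_FORMAT assumes the sample's [13/Jan/2016 21:03:09:200] timestamps):

```
[flowershop_access]
SHOULD_LINEMERGE = false
LINE_BREAKER = (&&&)
TIME_PREFIX = \[
TIME_FORMAT = %d/%b/%Y %H:%M:%S:%3N
```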
Hi, I also have the same issue. I want to trigger the alert like below: once the condition starts, there should be an alert. After a certain amount of time, say 4 hours, I should get another alert, and I should not get any alerts in between. Did anyone reply to this, or is there a solution like setting up a throttle? This post is related to this existing Community post: https://community.appdynamics.com/t5/Dashboards/Time-based-health-rules-not-possible/m-p/29930#M337
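On the Splunk side (the linked thread concerns AppDynamics health rules, which work differently), this "alert at most once per 4 hours" behaviour is what alert suppression, i.e. throttling, provides. A sketch of the relevant savedsearches.conf keys (the search name and suppression field are placeholders):

```
[My 4-hourly alert]
alert.suppress = 1
alert.suppress.period = 4h
alert.suppress.fields = host
```

With this in place, after an alert fires, further matches for the same suppression-field value are silenced until the 4-hour window expires.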
Hi, I would like to know: if we change the status of an incident in Splunk Phantom, can we automatically notify a user?
Is Splunk Enterprise or Splunk Enterprise Security (ES) able to detect attacks by rogue systems or AI-enabled servers? Are AI-enabled servers able to create user accounts in my Splunk environment?
Hi Team, I have 1 deployer and 3 search heads. Where should I create the KV store? Should I create it on the deployer, and will it then be deployed to the search heads? Or should I create it on one of the search heads?
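In a search head cluster, KV store collections are typically defined in an app staged on the deployer and pushed to all the search heads; the collection data itself then lives and replicates on the search heads. A sketch (the app, collection, and field names are placeholders):

```
# On the deployer:
# $SPLUNK_HOME/etc/shcluster/apps/my_app/default/collections.conf
[my_collection]
field.user = string
field.count = number
```

Pushing the bundle with `splunk apply shcluster-bundle` then creates the collection definition on all three search heads.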
Hi All, We have an issue with a number of our UFs wherein they have stopped sending internal logs after a recent app update from the deployment server. The UFs are still sending other events, there are no errors in splunkd.log indicating why they are not sending internal logs, and the app changes from the DS contain nothing that should affect this (i.e. nothing like blacklisting _internal, etc.). We have attempted restarting the UFs, but no change. The number of "other" events is not that high (one host is around 800 events for the last 15 mins), so we are ruling out the UF being overloaded and dropping internal logs because of that. Looking for any advice or troubleshooting steps I can use to figure out why these clients are no longer sending internal logs. Thanks in advance.
Hello everyone, I hope you guys are doing just great! I have a sort of simple question but I have not been able to come up with a solution. I want to filter out multivalue entries of a table based on a numeric criterion. This is an example. I have this:

AGENT=XX_9   INX=(7, 9, -6)    ROCKS=(T, Y, U)     TASK=(TY-8, GY-0, FG-67)
AGENT=XX_10  INX=(7, -49, -66) ROCKS=(UY, IO, UJI) TASK=(TY-8E, G-0, VG-67)

I would like to remove every multivalue entry where the "INX" value is negative, along with the corresponding ROCKS and TASK values, so I end up with something like this:

AGENT=XX_9   INX=(7, 9)  ROCKS=(T, Y)  TASK=(TY-8, GY-0)
AGENT=XX_10  INX=(7)     ROCKS=(UY)    TASK=(TY-8E)

I have tried using mvfilter and mvfind and mvindex but every trial has been unsuccessful so far, so I would really love some help. Thanks a LOTTTT. Kindly, Cindy
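Because the entries in INX, ROCKS, and TASK are positionally aligned, one way (a sketch; mvmap needs Splunk 8.0+, and "|" is assumed not to occur in the data) is to zip the three fields into one multivalue field, filter out the entries whose INX part is negative, then split them back apart:

```
| eval zipped=mvzip(mvzip(INX, ROCKS, "|"), TASK, "|")
| eval zipped=mvfilter(NOT match(zipped, "^-"))
| eval INX=mvmap(zipped, mvindex(split(zipped, "|"), 0)),
       ROCKS=mvmap(zipped, mvindex(split(zipped, "|"), 1)),
       TASK=mvmap(zipped, mvindex(split(zipped, "|"), 2))
| fields - zipped
```

Each zipped entry looks like "-6|U|FG-67", so matching "^-" identifies exactly the entries with a negative INX.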
Hi, My dashboard has a row-clickable table.

allassetstable.on("click", function(e) {
    // Bypass the default behavior
    e.preventDefault();
    // Displays a data object in the console
    console.log("Clicked the table:", e.data);
});

Once I clicked the table, I got this from the console:

Clicked the table: {click.name: "database_id", click.value: "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d", click.name2: "database_id", click.value2: "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d", row.database_id: "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d", …}
click.name: "database_id"
click.name2: "database_id"
click.value: "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d"
click.value2: "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d"
earliest: "1621956660.000"
latest: 1624819906
row.anomaly_count: "144"

How should I modify the JS code to get the value stored in click.value instead of the entire data object? I tried e.data.click.value before but it didn't work out. Thank you in advance! Regards,
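The reason e.data.click.value fails is that "click.value" is a single flat key containing a literal dot, not a nested click object with a value property, so bracket notation is needed. A minimal sketch:

```javascript
// Splunk's table click event data uses flat keys with dots in the names.
const e = {
    data: {
        "click.name": "database_id",
        "click.value": "dfd033abe230eb961276fd5981c209f11b7925ee8736474265196215bc0f8b7d"
    }
};

// e.data.click is undefined, so e.data.click.value throws a TypeError.
// Bracket notation with the full key name reads the value directly:
const value = e.data["click.value"];
console.log(value); // the clicked row's database_id
```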
My source dashboard has a clickable table. Once a user clicks a row, the user is taken to a destination dashboard, and the token is set to database_id, which is then passed in to run a search on the destination dashboard. Both dashboards use a single JS file, but I realized the token is not successfully passed in to the search, since the token value is not yet set when the destination dashboard loads the JS file. Could someone please advise on how to transfer the token value between two dashboards using SplunkJS? Thank you in advance! Here's the code for the event handler on the source dashboard:   allassetstable.on("click", function(e) { // Bypass the default behavior e.preventDefault(); // set token name and value tokenSet.set("databaseID_tok", e.data.click.value); // link back to asset-specific anomalies page utils.redirect("Ano","_blank"); });     Below is the code for the search on the destination dashboard:   var search2 = new SearchManager({ id: "ano", preview: true, cache: true, latest_time: "now", search: "index=anomalies database_id=$databaseID_tok$" }, {tokens: true});
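One approach that works regardless of when each page loads the shared JS file is to carry the value in the destination URL itself, then read it back and set the token before the SearchManager is constructed. A sketch (the parameter name is a placeholder, and `tokenSet` is assumed to be the same token model used above; note also that the click value is under the flat key "click.value"):

```javascript
// Source page: append the clicked value to the destination URL.
function buildDestUrl(base, value) {
    return base + "?databaseID_tok=" + encodeURIComponent(value);
}
// e.g. utils.redirect(buildDestUrl("Ano", e.data["click.value"]), "_blank");

// Destination page: read the value back out of the URL and set the token
// before the SearchManager that references $databaseID_tok$ is created.
function readToken(search) {
    return new URLSearchParams(search).get("databaseID_tok");
}
// e.g. tokenSet.set("databaseID_tok", readToken(window.location.search));
```

This removes the dependency on in-memory state surviving the page transition, which it never does: each dashboard load is a fresh JS environment.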
I have a search result like below:

{ [-]
   dt: 2021-06-24T22:46:40.7013297Z
   flds: [ [-]
     { [-]
       fn: username
       nv: LearningRegApplication
     }
     { [-]
       fn: dbQueries
       nv: SQL_QUERIES=Select emp.fieldA, emp.fieldB, emp.fieldC, emp.fieldD from Template.table emp WHERE (UPPER(emp.fieldA) = UPPER(:emp.fieldA)) AND (UPPER(emp.fieldB) = UPPER(:emp_fieldB)) AND (UPPER(emp.fieldC) IN (UPPER('aaa'), UPPER('bbb'), UPPER('ccc'), UPPER('ddd')))
     }
   ]
   sf: EmployeeLogic
   sid: T1-SECURITY-{A-8FE-76E9-C3A2-ED890B}
   sm: GetAsync
   stat: Success
   tid: yb6Any-PG00IG53
}

I used

| spath output=status path=stat
| spath output=nv path=flds{}.nv
| spath output=fn path=flds{}.fn
| table fn nv host status

The results return 4 columns. Each row has the two fn/nv key-value pairs stacked together as multivalues, while host and status have only the hostname and status values, as follows:

(Row 1)
fn: username, dbQueries
nv: LearningRegApplication, SQL_QUERIES=Select emp.fieldA, emp.fieldB, emp.fieldC, emp.fieldD from Template.table emp WHERE (UPPER(emp.fieldA) = UPPER(:emp.fieldA)) AND (UPPER(emp.fieldB) = UPPER(:emp_fieldB)) AND (UPPER(emp.fieldC) IN (UPPER('aaa'), UPPER('bbb'), UPPER('ccc'), UPPER('ddd')))
host: MyHost
status: Success

I have also uploaded a CSV file that contains all the query field names. I want to see which application queried each field and show the apps on that field's row (not sure what the best way to present this information is). So the output may look something like:

fieldA  LearningRegApplication
fieldB  LearningRegApplication, AThirdApp
fieldC  LearningRegApplication, ASecondApp
fieldD  LearningRegApplication
fieldE  ASecondApp, AThirdApp
fieldF  AForthApp
:
:

What is the way to use the full CSV list of field names to map the apps to each SQL field name?
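Since fn and nv are positionally aligned multivalue fields, one way (a sketch; the emp\. extraction pattern is an assumption based on the sample query, and the lookup name is a placeholder) is to pull the app name and query out of each event by position, extract the referenced field names, and aggregate:

```
| spath output=fn path=flds{}.fn
| spath output=nv path=flds{}.nv
| eval app=mvindex(nv, mvfind(fn, "^username$"))
| eval query=mvindex(nv, mvfind(fn, "^dbQueries$"))
| rex field=query max_match=0 "emp\.(?<sql_field>\w+)"
| mvexpand sql_field
| stats values(app) as apps by sql_field
```

To also show CSV fields that no application queried, this result can be merged with the uploaded list, e.g. by appending `| inputlookup append=t your_fields.csv` (with the CSV's column renamed to sql_field) and re-aggregating.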
I was using the AppD free trial, and then it converted to Lite. I was able to use the SaaS controller. After a month, I logged back in and the controller (elephant202012290737558) has vanished. I see the following, and I am not able to create a new controller or subscription. How can I get back the controller or create a new one?
Hi, Is it possible for a Splunk universal/heavy forwarder to forward data to a third-party REST API endpoint over HTTPS using basic authentication? I have a use case where the Splunk universal/heavy forwarder has to forward data both to Splunk Enterprise and to a third-party client's REST API endpoint for processing. Is this use case possible?
Hello, ***this is my first time using Splunk.*** I installed Splunk and set up a simple syslog UDP input on port 11514. It used to work great until last week, then it stopped getting any data. I disabled the service and opened a simple Python UDP server (to see whether I was getting any data and where the problem is; I didn't change any settings on the sender side). With the Python UDP server I do get the messages. I shut down the Python server and enabled Splunk, but I still get nothing. What could it be? I'm using the free license (and it hasn't been more than 30 days, if that's an issue). Thanks,
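Since the packets demonstrably reach the host, one thing worth re-checking is that the UDP input stanza is still enabled after whatever changed last week. A sketch of what inputs.conf should roughly contain (the index and sourcetype are placeholders):

```
[udp://11514]
sourcetype = syslog
index = main
connection_host = ip
disabled = 0
```

It is also worth confirming that splunkd (not some other process) is actually bound to the port while Splunk is running, and that the target index exists and is not over quota.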
Hi, I just installed the Splunk Security Analytics for AWS Content app. Almost immediately I received a lot of error messages showing conflicts:

INFO pid=302591 tid=MainThread file=confcheck_es_correlationmigration.py:run:115 | Processing duplicate: ESCU - AWS Investigate User Activities By AccessKeyId - Response Task
INFO pid=302591 tid=MainThread file=confcheck_es_correlationmigration.py:run:140 | Unrelocatable duplicate stanza found: stanza="ESCU - AWS Investigate User Activities By AccessKeyId - Response Task" conf_type="savedsearches" apps="DA-ESS_AmazonWebServices_Content,DA-ESS-ContentUpdate"

Is all the content in DA-ESS_AmazonWebServices_Content already in DA-ESS-ContentUpdate?
I'm trying to display two timecharts together, to make it easy to spot the times when no response was received for a request sent. The search looks like this:

index="production" "sending request"
| dedup requestId
| timechart span=1h sum(count) as count1
| append [search index="production" "response received" | dedup requestId | timechart span=1h sum(count) as count2]

This is not showing the two columns (count1 and count2) I was hoping to see, and appendcols doesn't work either. I tried just using count:

index="production" "sending request"
| dedup requestId
| timechart span=1h count
| append [search index="production" "response received" | dedup requestId | timechart span=1h count]

This shows only one column, called count. Any help is greatly appreciated!
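Rather than appending two timecharts (which stacks two separate result sets instead of merging columns), a single search can classify each event and let timechart split the count. A sketch (assuming both phrases appear in the raw events):

```
index="production" ("sending request" OR "response received")
| eval type=if(searchmatch("sending request"), "requests", "responses")
| dedup requestId type
| timechart span=1h count by type
```

The dedup runs on requestId and type together, so a request and its matching response with the same requestId are both kept; the result is one row per hour with a requests column and a responses column.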
How do I plot the HTTP error % as a timeseries? (When I add _time or a timeseries count I get "DAG: Execution exception (search cancelled)".) Query:

index=abc source=efg cdn=pqr "message.reqHost"="*xyz*" earliest=-15m latest=now
| eventstats count as Total
| eval Status=case(like(dm_status,"4%"),"HTTPERR", like(dm_status,"5%"),"HTTPERR")
| stats count as ERR_Count by Status, Total
| eval ERR_PERCENTAGE=round(((ERR_Count)/Total)*100,2)
| eval ERR_PERCENTAGE=if(isnull(ERR_PERCENTAGE),"0",ERR_PERCENTAGE)
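One way to get the percentage per time bucket is to turn the error test into a 0/1 flag and let timechart do both the bucketing and the totals in one pass (a sketch using the same field names as the query above):

```
index=abc source=efg cdn=pqr "message.reqHost"="*xyz*" earliest=-15m latest=now
| eval is_err=if(like(dm_status,"4%") OR like(dm_status,"5%"), 1, 0)
| timechart span=1m sum(is_err) as ERR_Count count as Total
| eval ERR_PERCENTAGE=round(ERR_Count/Total*100, 2)
| fillnull value=0 ERR_PERCENTAGE
```

This avoids the eventstats/stats combination entirely, which also sidesteps the grouping-by-Total issue in the original query.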