All Topics

Hi all, we upgraded from Splunk 7.3.5 to 8.1.1. Most of the apps/add-ons work, except one obsolete one that supports Python 2.7 only. We got this error during setup:

01-05-2021 14:26:55.771 +0800 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': The script at path=/opt/splunk/etc/apps/TA-ms-loganalytics/bin/TA_ms_loganalytics_rh_log_analytics.py has thrown an exception=Traceback (most recent call last):
01-05-2021 14:26:55.771 +0800 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python3.7 /opt/splunk/bin/runScript.py setup': File "/opt/splunk/etc/apps/TA-ms-loganalytics/bin/ta_ms_loganalytics/splunktaucclib/rest_handler/endpoint/validator.py", line 389

Since Splunk 8 includes a Python 2.7 runtime and binary, is it possible to tell Splunk to use Python 2.7 when handling this app? Thanks a lot. Regards /ST Wong
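For reference, Splunk 8.x ships a Python 2.7 interpreter alongside Python 3.7, and the python.version setting selects which one runs a given script. A minimal sketch (the stanza shown is illustrative; whether the app actually works under it depends on the app):

```ini
# server.conf -- global default interpreter (values: default, python2, python3)
[general]
python.version = python2

# or per-script, e.g. in the app's inputs.conf / restmap.conf stanzas
[script://./bin/some_script.py]
python.version = python2
```

Note that Python 2.7 support is deprecated in Splunk 8 and removed in later releases, so getting a Python 3 compatible version of the add-on is the long-term fix.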
[ind2]
homePath = $SPLUNK_DB/ind2/db
coldPath = $SPLUNK_DB/ind2/colddb
thawedPath = $SPLUNK_DB/ind2/thaweddb
maxHotBuckets = 10
maxDataSize = 10000
maxWarmDBCount = 300
maxTotalDataSizeMB = 50000
frozenTimePeriodInSecs = 31536000
coldToFrozenDir = $SPLUNK_DB/ind2/frozendb

With this config, how long will data reside in each bucket? As I understand it, after 10 GB of data a hot bucket rolls to warm, and when the 300-bucket warm limit is full, buckets roll to cold. Or is it that there are 10 hot buckets, and each hot bucket takes 10 GB of data before it rolls to warm? Is there anyone who can clarify?
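One way to see how the buckets in this index are actually sized and rolling is the dbinspect command; a sketch using the index name from the post:

```spl
| dbinspect index=ind2
| stats count as buckets sum(sizeOnDiskMB) as totalMB min(startEpoch) as oldestEpoch by state
```

The state column shows hot/warm/cold, so you can compare real bucket counts and sizes against the maxDataSize and maxWarmDBCount limits in the config.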
When I try to download the Splunk Enterprise software, I get a pop-up saying that I am already logged in and to please wait momentarily. After that I see the same page with the download link. I tried multiple times and still could not download the software.
Every time I configure index clustering, the peer nodes work fine. But when I try to open my peer nodes the next day, the web server will not load. Can anyone help me with this?
Looking for new resources for transitioning from ArcSight to Splunk, please. The resources found on the Micro Focus site are very old. Links and docs are much appreciated. If you have done this before, any do's and don'ts are welcome. Thank you.
I have a dropdown (say field A) as an input to a dashboard, and this dropdown value is passed to/used by only certain panels in the dashboard. The other panels (where this field value is not used) should be displayed as normal when the dropdown value is "*". But when a dropdown value other than "*" is selected, those other panels should be displayed in grey. Note: the panels where the dropdown value is not used are single-value charts, displayed in green, amber, or red depending on the values they show. When a dropdown value other than "*" is selected, they should be greyed out. Kindly help.
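One common way to switch panel appearance from a dropdown is to set tokens in a &lt;change&gt; handler and drive panel visibility with depends; a Simple XML sketch with made-up token names:

```xml
<input type="dropdown" token="fieldA">
  <change>
    <condition value="*">
      <set token="normal_mode">true</set>
      <unset token="grey_mode"></unset>
    </condition>
    <condition>
      <set token="grey_mode">true</set>
      <unset token="normal_mode"></unset>
    </condition>
  </change>
</input>
<!-- two copies of each unaffected panel: the normal one and a grey-styled duplicate -->
<panel depends="$normal_mode$">...</panel>
<panel depends="$grey_mode$">...</panel>
```

The grey look itself would come from CSS applied to the duplicate panel; depends only controls which copy is shown.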
Hi, I have a simple dashboard that runs JavaScript on page load. The idea is that I have a button that performs an action when pressed. But the JavaScript runs when I load the page; the button does work when you press it, but the issue is the action now fires twice, not once. Any help would be great, cheers.

<dashboard script="run_command_from_button.js">
  <label>Sync Configuration</label>
  <row>
    <panel>
      <title>The button below will sync any changes from the search head to the indexers; this can take 2 minutes to fully sync</title>
      <html>
        <div>
          <input id="btn-submit" type="button" class="btn btn-primary" value="Click to Sync"/>
        </div>
      </html>
    </panel>
  </row>
</dashboard>

The JavaScript:

require([
    "jquery",
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/simplexml/ready!"
], function($, SearchManager) {
    var mysearch = new SearchManager({
        id: "mysearch",
        autostart: "false",
        search: "| runshellscript Test_runshellscript.sh 1 1 1 1 1 1 1 1"
    });
    $("#btn-submit").on("click", function() {
        var ok = confirm("Push New Configuration out to the Indexers?");
        if (ok == true) {
            mysearch.startSearch();
        } else {
            alert('New Conf Not Pushed');
        }
    });
});
I'm trying to add attributes via the Phantom MISP app. Adding attributes works fine for me when I'm just using predefined fields for specific values like 'email-dst', but I also need to include a 'comment' for the attributes I'm adding. So I decided to use the 'json' field of the GUI configuration, which should allow me to pass custom-built attributes. And here the first issues appeared. The app documentation does not give an example of how the mentioned JSON string should look when adding custom attributes. Initially I based it on "Automation and MISP API" in the user guide of MISP, the Malware Information Sharing Platform (circl.lu), where the JSON string was like this: {"event_id":"3542","value":"1.2.3.4","category":"Network activity","type":"ip-dst"} But unfortunately this one did not work - no attribute was added. Via trial and error I discovered that I can add attributes of a defined type with the following JSON, which is extremely different from the one mentioned in the MISP API documentation: {"email-dst":"test@email.com,"}. And please note that the comma at the end is not a typo - without it, no attribute is added. I have no idea why it works this way, but it allows me to add an attribute to an event. However, this is where I got stuck: I have no idea how to include the comment field for such an attribute. I've tried several combinations containing {"comment":"abc"}, but then I receive 3 attributes of 'comment' type with the values 'a', 'b', 'c'. Does anyone know how to add attributes with a comment using the Phantom MISP app?
Hi, I am stuck with a problem where I need help from you guys. I have a search that runs IDs against a lookup to determine whether an ID is from the Production or Test environment. The problem is that some IDs are duplicated, meaning they exist in both Prod and Test. So when I search and hit one of those IDs, it counts against Prod and against Test. I wanted to filter on two conditions that must both be met: first the specific ID, and second the environment. I can't get that to work. Is there any way to require two conditions to be met before the lookup returns a result? I appreciate any feedback.
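For reference, the lookup command can match on more than one input field at once, which is one way to require both conditions; a sketch with made-up lookup and field names:

```spl
... | lookup env_ids.csv id, environment OUTPUT owner
```

This only returns the output field for rows where both id and environment in the lookup match the corresponding fields in the event, so a Prod ID no longer matches the Test row.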
Hi everyone, I'm kind of stuck here and need the help of people who have more knowledge than me :). Currently I have built a (rather) advanced search whose result is basically a report based on some validation rules. To keep it simple, the result of the search tells the end users which orders they need to reapply because of missing data. The plan is to schedule this report daily and send the list of incorrect orders to the correct people. However, there are some requests on how this information should be visible. For this I have been looking into the reporting module, to give access to the report URL and provide the results. The main issue with this is that reports will always run and send an email even when there are no results. I'm looking for a way to make it so that the report email is not sent when there are no results. Is this in any way possible? I know this can be achieved via an alert, which can also contain a link to the results of the alert. The main issue with that solution is that the search head becomes visible when redirecting to this link. Since the people accessing the results do not have much Splunk knowledge themselves, I don't want them to have any insight into the search or be able to change anything in the results, since this will just cause confusion. Would this be possible to achieve via a custom Python script? And if so, is there any standard available? Or are there "better" ways to do so?
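For reference, a scheduled search saved as an alert can be configured so the email action only fires when there are results; in savedsearches.conf terms, a sketch (stanza name, schedule, and recipient are made up):

```ini
[Daily incorrect orders]
enableSched = 1
cron_schedule = 0 7 * * *
# only trigger when the search returns more than 0 results
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = order-team@example.com
action.email.inline = 1
```

The same trigger condition ("Number of Results is greater than 0") can be set in the UI when saving the search as an alert with an email action.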
Hello, I wrote an Ansible playbook that installs and configures the Splunk UF. I created two playbooks, one for Linux machines and one for Windows. When I run the Windows playbook, it uses this command to install the MSI file: msiexec.exe /i splunkuniversalforwarder.msi [<flag>=<value>]...[<flagN>=<value>] I was able to set an indexer and a deployment server, and to choose which of the Windows event log inputs to receive data from, and so on. The only problem I have is that all the data is sent by default to the "main" index. I can't seem to find a flag that will send the data to a different index in the first place. I know I can change it in the inputs.conf file, but the idea is to do all the configuration from the playbook without needing to change things later. I want to send all the data from the event log to one index. Can I set it in the installation command? If not, where can I set it so it won't send any data to the "main" index? Thanks!! omer shira
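As far as I can tell, the UF MSI has no per-input index flag, but since the playbook already sets a deployment server, the index can live in an inputs.conf pushed as a deployment app, so nothing has to be edited on the host afterwards. A sketch (app and index names are made up):

```ini
# etc/deployment-apps/my_win_inputs/local/inputs.conf on the deployment server
[WinEventLog://Security]
disabled = 0
index = win_events

[WinEventLog://System]
disabled = 0
index = win_events
```

Alternatively, the playbook could drop the same inputs.conf into the UF's etc/system/local (or an app directory) right after the msiexec step, keeping everything in Ansible.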
Suppose I have two sets of data: Workers, who have attributes such as location, pay grade, role, department, skills. Roles, which have attributes such as location, pay grade, role, department. If I also have a mapping of which workers have been assigned which roles in the past, including an attribute of whether the assignment was considered a success or a failure, could I use the past data to train a model and assign some predicted success/failure score to each possible worker/role combination? Note that it wouldn't be necessary for a worker and role to have attributes which are exact matches, but I might expect a model to identify combinations which have been successful in the past (E.g. a worker was in the IT department, and was successfully matched with a role in the Security department), and learn from that. I have looked at the documentation for the MLTK Showcase Examples and I'm not sure that any of the examples closely match what I would like to achieve. Does this sound feasible? Thanks.
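This sounds like a standard supervised classification problem, which MLTK's fit and apply commands can express; a sketch assuming the historical assignments live in a lookup, with made-up file and field names:

```spl
| inputlookup assignment_history.csv
| fit RandomForestClassifier outcome from worker_location worker_grade worker_dept worker_skills role_location role_grade role_dept into assignment_model
```

New worker/role combinations could then be scored with `| apply assignment_model`, which adds a predicted(outcome) field. The main practical work is feature preparation: categorical attributes like department and location need consistent values (or one-hot encoding) across both the training and scoring data.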
Has anyone had success in creating monitoring in Splunk to detect broken data feeds? We have hundreds of feeds - some have events constantly coming in, some get data just once a month - and everything in between! We need a solution smart enough to detect the pattern of events and alert when there is an anomaly. An anomaly might be a drastic reduction in events, or a source which was previously sending events stopping. The best we've got so far is a simple alert that fires when a sourcetype has not had data for a fixed amount of time.
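One step up from a fixed timeout is a per-sourcetype baseline, comparing each feed's latest daily count against its own history; a sketch (the 30-day window and 2-sigma threshold are arbitrary choices, and the most recent day may be a partial bucket):

```spl
| tstats count where index=* earliest=-30d by _time span=1d sourcetype
| stats avg(count) as daily_avg stdev(count) as daily_sd latest(count) as today by sourcetype
| where today < daily_avg - 2 * daily_sd
```

For monthly feeds a daily baseline will still false-alarm, so in practice this usually gets paired with a lookup of expected cadence per sourcetype, or with commands like anomalydetection on the per-feed time series.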
I'm trying to compare multivalue fields in a search. My query is below:

sourcetype=app2_log OR sourcetype=app1_log
| stats values(App1_Login_Time) as App1_Login_Time values(App2_Login_Time) as App2_Login_Time by User
| eval res = if(App1_Login_Time > App2_Login_Time, "illegal activity", "ok")

The output for the query above is:

User  | App1_Login_Time | App2_Login_Time            | Res
user1 | 08:41:33        | 08:55:20                   | ok
user2 | 08:43:00        | 09:01:18                   | ok
user3 | 08:40:25        | 08:10:30 08:20:12 08:30:15 | ok
user4 | 08:30:20        | 08:10:05                   | illegal activity
user5 | 09:35:20        | 09:50:00                   | ok

As you can see, the query checks clients' login times for both applications. If a client logged in to app2 before app1, that is illegal activity in my case. The query output is correct for user1, 2, 4, and 5: user4 logged in to app2 before app1, so the Res column says 'illegal activity'. But user3 also logged in to app2 before app1, and the Res column says 'ok'. There are many users and I checked most of them: when there are multiple values for an application login time, the query can't compare them and give a correct result. The result for user3 should be:

user3 | 08:40:25        | 08:10:30 08:20:12 08:30:15 | illegal activity illegal activity illegal activity

So with this output, I need to get three "illegal activity" results for user3. There can also be a case where a user logs in to app2 before app1 once (which should be considered "illegal activity"), then logs in to app1 again and after that app2 (which should be considered "ok"):

user6 | 08:30:20 09:15:00 | 08:15:10 09:30:00 | illegal activity ok

I also need to count the number of illegal activities for a specific user. I couldn't make Splunk do this comparison. Any ideas?
EDIT: There is also a scenario where users log in to app1 but never log in to app2, which is considered "ok" in my case:

user01 | 08:30:00 |  | ok

Thank you.
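For the per-value comparison, mvmap (available in Splunk 8.0+) can evaluate each App2 value against the earliest App1 value; a sketch that covers the user3 and user01 cases, but not the interleaved user6 case, which likely needs streamstats over the raw time-ordered events instead of stats values():

```spl
... | eval first_app1 = mvindex(mvsort(App1_Login_Time), 0)
| eval res = mvmap(App2_Login_Time, if(App2_Login_Time < first_app1, "illegal activity", "ok"))
| eval illegal_count = mvcount(mvfilter(match(res, "illegal activity")))
| fillnull value=0 illegal_count
```

This relies on the HH:MM:SS strings sorting lexicographically in time order; if logins can span midnight, the values should be converted to epoch times with strptime first.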
Can anyone please share some best practices or your own preferred method for populating the watchlist field in the assets and identities lookup tables in ES? We are currently using SailPoint data to populate the identities lookup. The only reference I have found is someone using the logic below, leveraging the ldapsearch command:

| eval watchlist=if((userAccountControl % 4)>=2,"true","")
After accelerating the CIM Validation (S.o.S.) DM, checking the pivot for any of the datasets results in an error. Example below:

Datamodel 'Splunk_CIM_Validation.Authentication' had an invalid search, cannot get indexes to search

Upon checking search.log, it states:

ERROR DataModelEvaluator [3485 BatchSearch] - Data model 'Authentication' was not found.
01-04-2021 12:56:11.393 ERROR SearchOperator:datamodel [3485 BatchSearch] - Error in 'DataModelEvaluator': Data model 'Authentication' was not found.
01-04-2021 12:56:11.394 ERROR TsidxStats [3485 BatchSearch] - Error in 'SearchOperator:datamodel': Error in 'DataModelEvaluator': Data model 'Authentication' was not found.
01-04-2021 12:56:11.394 ERROR TsidxStats [3485 BatchSearch] - sid:etc.splunkcloud.com Datamodel 'Splunk_CIM_Validation.Authentication' had an invalid search, cannot get indexes to search

Update: I found some similar posts where they mention this might be due to a permission issue, but I have checked the permissions for this DM and it defaults to read for "Everyone". Other DMs with the same permissions work well. Also, when acceleration is disabled, it shows data in pivot. Can someone please help fix this?
Hello, I have a single instance of Splunk Enterprise in my environment. Is there a way to forward the Splunk data to another SIEM product as required? Could you please provide details of the procedure for this?
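For reference, a Splunk instance can route a copy of incoming data to a third-party system via outputs.conf; a sketch with a made-up destination:

```ini
# outputs.conf -- send raw (uncooked) data to a non-Splunk receiver over TCP
[tcpout:siem_group]
server = siem.example.com:5514
sendCookedData = false

# or as syslog
[syslog:siem_syslog]
server = siem.example.com:514
```

Forwarding everything is the simple case; sending only selected sourcetypes "on a required basis" would additionally need props.conf/transforms.conf routing rules (_TCP_ROUTING or _SYSLOG_ROUTING) to pick which events go to the SIEM group.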
index="*" sourcetype="*" - I have a field name with a tag, and it's generated in 80% of events. How can I check why it's not generated in 100% of events, and where it's missing? Also, is there any regex / REST API / rex query to get which fields are generated in 100% of events, and if it's <100%, where it's missing?
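One way to measure a single field's coverage is to count events with and without it; a sketch with made-up index, sourcetype, and field names:

```spl
index=your_index sourcetype=your_sourcetype
| eval has_field = if(isnotnull(your_field), 1, 0)
| stats count as total sum(has_field) as with_field
| eval coverage_pct = round(100 * with_field / total, 1)
```

To survey all fields at once, fieldsummary reports a per-field count of events containing each field, which can be compared against the total event count; the events missing the field can then be inspected with `... NOT your_field=*`.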
Hi, I have just installed Splunk Enterprise on-prem and am trying to send data using HEC (port 8088). When I do a tcpdump, I see packets coming in to Splunk, but when I search (a basic search-all using "*"), I do not see anything. Is there anything basic I might be missing? Thanks, Raghu

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on enp7s0, link-type EN10MB (Ethernet), capture size 262144 bytes
01:24:01.349607 IP ip.25648 > worker1.radan-http: Flags [P.], seq 1131684074:1131684105, ack 2035586096, win 502, options [nop,nop,TS val 2515323145 ecr 929343968], length 31
01:24:01.349652 IP ip..25648 > worker1.radan-http: Flags [F.], seq 31, ack 1, win 502, options [nop,nop,TS val 2515323145 ecr 929343968], length 0
01:24:01.349774 IP worker1.radan-http > ip.25648: Flags [P.], seq 1:32, ack 32, win 1475, options [nop,nop,TS val 929350932 ecr 2515323145], length 31
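A quick way to isolate this is to post a test event directly and check the HEC response; a sketch (host and token are placeholders):

```
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hec smoke test", "sourcetype": "hec_test"}'
```

A healthy endpoint replies with {"text":"Success","code":0}. If the event still doesn't appear, common causes are the token's default index not existing, or the searching role not covering that index (so a bare "*" search skips it); the internal logs (index=_internal sourcetype=splunkd "HttpEventCollector") usually name the exact error.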
I have tried the sample searches below for acceleration, but they are not allowed.

index="_internal" | table host source
index=_internal | streamstats sum(bytes) as byte by sourcetype | stats count by host source byte

As per the Splunk docs, an accelerated report should contain transforming commands, and I believe table is a transforming command, so the first query should be eligible for acceleration, but it won't accelerate. Can you please help me understand why those searches are not allowed to be accelerated? Thanks
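For what it's worth, as I read the report acceleration docs, the search must end in a reporting/transforming command such as stats, chart, or timechart, with only distributable streaming commands before it; table does not qualify for acceleration despite being listed among transforming commands, and streamstats is a centrally streaming command, so it cannot precede the stats in an accelerated search either. A sketch of an acceleratable equivalent of the second search:

```spl
index=_internal sourcetype=splunkd
| stats count by host, source
```

If per-sourcetype running totals are genuinely needed, they generally have to be computed at search time on top of an accelerated stats base, or moved into a data model / summary index instead.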