All Topics


Hello, I have a dashboard that populates the results of a query in a table. The table is sometimes not usable because the query's job pops up a "truncation of data" message. I would like to know if there is a way to surface the job's warning message in the same dashboard, so that I don't have to drill down into the query and look for error messages in the job. I know it is possible to configure this with the $job.message$ token, but I don't know how to modify the dashboard source: what exactly do I have to write to set the token on the original search, and how do I display the value in another panel? Many thanks, Jaime
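For reference, the usual Simple XML pattern is to copy a job property into a dashboard token from the search's <done> handler and display it in a second panel. Everything below is a hypothetical sketch: the query, token name, and panel layout are invented, and whether $job.messages$ is populated for a given search can depend on your Splunk version, so treat this as untested:

```xml
<row>
  <panel>
    <table>
      <search>
        <query>index=main | stats count by host</query>
        <!-- copy the job's messages property into a dashboard token -->
        <done>
          <set token="job_msg">$job.messages$</set>
        </done>
      </search>
    </table>
  </panel>
  <panel>
    <html>
      <p>Job messages: $job_msg$</p>
    </html>
  </panel>
</row>
```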
Hello, I have the following issue: I have a VPN gateway used for users' remote connections, and this gateway sends its logs to Splunk. I would like to have in Splunk a list of currently logged-in VPN users, which would be used for various purposes (I need this list to be as current as possible; let's say a username should be added to/removed from this list no more than 5-10 minutes after the user logs in/out). My idea is to create a dynamic lookup based on the LOGIN and LOGOUT messages from the VPN gateway. What I mean exactly: when a LOGIN message for a particular user (let's say "USERA") appears, the username is extracted and added to the dynamic lookup "vpn_active_users.csv". When a LOGOUT message for the same user ("USERA") appears, the username is extracted and removed from "vpn_active_users.csv". I know how to create a dynamic lookup and append usernames to it, but I did not find a way to remove a previously added username from it. Is there any way to do this? Or is my approach completely wrong? Any hint would be highly appreciated. Many thanks. Regards, Lukas
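A pattern often suggested instead of removing individual rows is to rebuild the whole lookup on a schedule from the latest login/logout event per user. The index, sourcetype, and field names below are assumptions about the VPN gateway's data and would need adjusting; run something like this as a saved search every few minutes:

```
index=vpn sourcetype=vpn_gw (action="login" OR action="logout")
| stats latest(action) as last_action latest(_time) as last_seen by username
| where last_action="login"
| table username last_seen
| outputlookup vpn_active_users.csv
```

Since outputlookup replaces the file each run, users whose most recent event is a logout simply drop out of the list without any explicit delete.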
Hi guys, I'm facing a problem when starting/restarting Splunk on Windows: I get a message that the port is already bound, but I want to use the default ports. Please help! Most of the time I see this on the management port (8089) and the KV store port (8191). After I stop Splunk and start it again, the management port is still reported as bound. I tried killing the PIDs listening on the ports, but I'm not able to free them. For a few processes the kill fails with "no running instance of the task", even though I can see the PIDs exist: ERROR: The process with PID 7636 could not be terminated. Reason: There is no running instance of the task. Can you please suggest what should be done? Thanks in advance!
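As a general Windows troubleshooting sketch (not specific to Splunk), the owner of a bound port can be identified with netstat and then force-killed from an elevated command prompt; the PID below is just the one quoted in the post:

```bat
:: find which PID is holding the management port
netstat -ano | findstr :8089

:: force-kill that PID from an Administrator prompt (/T also kills child processes)
taskkill /F /T /PID 7636
```

If taskkill still reports "no running instance", the PID may belong to a zombie handle held by a service or a driver, in which case a reboot is often the only reliable way to release the port.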
Good morning team, we recently installed the Splunk Enterprise Security suite and I am configuring the settings. I had orphaned searches, which were fixed. Now I am getting the error below; I have included a few lines of the text errors.

Search Lag Root Cause(s): The percentage of high priority searches lagged (100%) over the last 24 hours is very high and exceeded the yellow thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=1. Total lagged Searches=1

07-07-2020 08:49:22.130 -0400 INFO SavedSplunker - savedsearch_id="nobody;Splunk_SA_CIM;_ACCELERATE_DM_Splunk_SA_CIM_Performance_ACCELERATE_", search_type="datamodel_acceleration", user="nobody", app="Splunk_SA_CIM", savedsearch_name="_ACCELERATE_DM_Splunk_SA_CIM_Performance_ACCELERATE_", priority=highest, status=success, digest_mode=1, scheduled_time=1594126140, window_time=0, dispatch_time=1594126143, run_time=18.078, result_count=74, alert_actions="", sid="scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5534aac642f80d961_at_1594126140_8224", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""
07-07-2020 08:49:18.705 -0400 INFO SavedSplunker - savedsearch_id="nobody;Splunk_SA_CIM;_ACCELERATE_DM_Splunk_SA_CIM_Intrusion_Detection_ACCELERATE_", search_type="datamodel_acceleration", user="nobody", app="Splunk_SA_CIM", savedsearch_name="_ACCELERATE_DM_Splunk_SA_CIM_Intrusion_Detection_ACCELERATE_", priority=highest, status=success, digest_mode=1, scheduled_time=1594126140, window_time=0, dispatch_time=1594126142, run_time=15.251, result_count=93, alert_actions="", sid="scheduler__nobody_U3BsdW5rX1NBX0NJTQ__RMD5eddd0618b168fff8_at_1594126140_8223", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""
07-07-2020 08:49:10.432 -0400 INFO SavedSplunker - savedsearch_id="nobody;SA-ThreatIntelligence;Threat - Correlation Searches - Lookup Gen", search_type="scheduled", user="nobody", app="SA-ThreatIntelligence", savedsearch_name="Threat - Correlation Searches - Lookup Gen", priority=default, status=success, digest_mode=1, scheduled_time=1594126140, window_time=0, dispatch_time=1594126143, run_time=5.586, result_count=1, alert_actions="",

I have gone through each part of the applications listed and do not have any saved searches that need to be rebuilt or
I created a Dashboard in the Search & Reporting app and was not given the option to set the sharing/permissions at the time of creation. And when I go to the Dashboards page and expand the "Edit" dropdown under the "Actions" column I do not see an option for "Edit Permissions". My Splunk user account has the power and admin roles, and I've confirmed that the Search & Reporting app has granted write permission to the admin role. I have other colleagues that have the same roles as I have, and they ARE able to change permissions on Dashboards, so I believe that my user account may be misconfigured. Could someone please help me resolve this issue?
I'm using splunk-bunyan-logger to log to Splunk. The example on https://github.com/splunk/splunk-bunyan-logger suggests using it like:

Logger.info({
    message: {
        temperature: "70F",
        chickenCount: 500
    }
}, "Chicken coup looks stable.");

I'm using it like:

logger.info({ name, type: 'queryPerformance', ms }, `${name} took ${ms} ms`);

Despite my not wrapping my own fields (name, type and ms) in a `message` object, in Splunk they still end up inside a message object, so I have to search by `message.type` instead of just `type`. Also, the text message ("Chicken coup looks stable." or `${name} took ${ms} ms`) does not show up anywhere at all. Is there a better way to use splunk-bunyan-logger to make it log the way I want?
Hello Splunkers. I am new to Splunk and have a question: how can I change the index, at index time, for events that e.g. have status 404?

props.conf:

[weblogs]
LINE_BREAKER = (&&&)
NO_BINARY_CHECK = true
REPORT-access = access-extractions
SHOULD_LINEMERGE = false
maxDist = 28
...
TRANSFORMS-change = notfound,changesourcetype

transforms.conf:

[notfound]
REGEX = ".+?"\s(404)
DEST_KEY = MetaData:Index
FORMAT = index::notfoundindex

[changesourcetype]
DEST_KEY = MetaData:Sourcetype
REGEX = ^(.*)
FORMAT = sourcetype::access_combined

Example event:

141.146.8.66 - - [13/Jan/2016 21:03:09:200] "POST /category.screen?category_id=SURPRISE&JSESSIONID=SD1SL2FF5ADFF3 HTTP 1.1" 200 3496 "http://www.myflowershop.com/cart.do?action=view&itemId=EST-16&product_id=RP-SN-01" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.38 Safari/533.4" 294

Changing the sourcetype works fine, but changing the index doesn't, and I really do not know where the mistake is.
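For comparison, the documented transforms.conf examples for index routing use a destination key with a leading underscore, _MetaData:Index, and a FORMAT that is the bare index name (no index:: prefix) — unlike sourcetype routing, which uses MetaData:Sourcetype and a sourcetype:: prefix. An untested variant of the stanza along those lines:

```
[notfound]
REGEX = ".+?"\s(404)
DEST_KEY = _MetaData:Index
FORMAT = notfoundindex
```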
index=xx* app_name="xxx" OR cf_app_name="yyy*" OR app_name="ccc"
| bucket _time span=1d
| eval dayweek=strftime(_time,"%H")
| convert timeformat="%m-%d-%y" ctime(_time) as c_time
| eval Job = case(like(msg, "%first%"), "first Job", like(msg, "%second%"), "second Job", like(msg, "%third%"), "third job", like(msg, "%fourth%"), "fourth job")
| stats count(eval(like(msg, "%All feed is completed%") OR like(msg, "%Success:%") OR like(msg, "%Success: %") OR like(msg, "%Finished success%"))) as Successcount count(eval(like(msg, "%Fatal Error: %") OR like(msg, "%Fatal Error:%") OR like(msg, "%Job raised exception%") AND like(msg, "% job error%"))) as failurecount by Job c_time dayweek
| eval status=case((Job="fourth job") AND (dayweek=="Saturday" OR dayweek=="Sunday"),"NA",Successcount>0,"Success",failurecount>0,"Failure")
| xyseries Job c_time status

My result:

Job      date1     date2     date3
first    Success   Success   Failure
second   Success   Success   Success

I want to color the status cells (Success as green and Failure as red), but because the columns come from xyseries on c_time their names are dynamic, so I'm not able to set the colors per field.
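If this is a Simple XML dashboard, cell colors can be mapped per value with a <format> element under the table. Since the xyseries columns are dynamic date names, the sketch below omits the field attribute so the mapping applies across all columns; this behavior and the hex values (the stock green/red/grey) should be verified on your Splunk version:

```xml
<table>
  <search>
    <query>... | xyseries Job c_time status</query>
  </search>
  <format type="color">
    <colorPalette type="map">{"Success": #53A051, "Failure": #DC4E41, "NA": #C3CBD4}</colorPalette>
  </format>
</table>
```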
Hi, I have data like below. I want to split the string into column values, then join them with my query.

System         effected Region
a:b:c;d;e;f    India

I need it like below:

system    effected Region
a         India
b         India
c         India
d         India
e         India
f         India

Thanks in advance
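A hedged SPL sketch (field names taken from the sample above; note the sample value mixes ":" and ";" delimiters, so both are normalized to one delimiter before splitting):

```
| eval system=split(replace(System, ";", ":"), ":")
| mvexpand system
| table system "effected Region"
```

mvexpand fans the multivalue field out into one row per value, carrying the other fields ("effected Region") along with each row.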
Hi can you help with these security questions about how Splunk handles sessions? (Either On-Premise Enterprise Splunk or in Cloud) We can't find anything about it in the Splunk Enterprise / Splunk Cloud documentation.   What session management methodology is used (e.g. non-persistent cookies, session tokens stored in the database, by URL)? (Assume non-persistent cookies) How is the session token constructed (i.e. data elements)? What steps have been taken to ensure that it is effectively non-forgeable? If encryption is used, provide algorithm and key length? Is a session table used to manage sessions? If so, does it exist in memory on the web server, or is it stored further back in the system hierarchy? Does it contain passwords (either plain text or encrypted)?  
Hello, I noticed that a lot of events do not have the same timestamp as the time Splunk indexed them. Can you tell me how I can compare the date of the event with the Splunk index timestamp, please? Best regards
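The indexing time is available in every event as the hidden _indextime field, so the two timestamps can be compared directly in SPL. A minimal sketch (the index name is a placeholder):

```
index=your_index
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| eval lag_seconds=_indextime - _time
| table event_time index_time lag_seconds
```

Sorting or charting on lag_seconds then shows which sources have the largest gap between event time and indexing time.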
I have a search triggering for a certain failed threshold for a monitored value. Instead of making 7 alerts one per customer, I made one search and one alert creating a table of results. Hence I needed to use the "Trigger for each result" option in alerts. Then I needed to suppress per customer when the trigger value exceeded threshold. My alert searches every minute for the last 15 minutes, and is supposed to throttle for 15 minutes on hit. Googling and documentation suggest setting 'customer' field in the "Suppress results containing field value" text box in Splunk. This did not suppress when "For each result" was enabled, and I got an alert every minute. So how to do it?
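For reference, per-result throttling maps to these savedsearches.conf keys; the stanza name is hypothetical, alert.digest_mode = 0 is what "Trigger for each result" writes, and alert.suppress.fields names the field(s) that key the suppression. Comparing the deployed stanza against this shape may show whether the UI actually saved the field name:

```
[Per-Customer Threshold Alert]
alert.digest_mode = 0
alert.suppress = 1
alert.suppress.period = 15m
alert.suppress.fields = customer
```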
We are facing an issue while creating tickets. On the first run of the correlation search, notable events are generated and grouped into an episode; however, it creates multiple tickets for the episode (one per event in the episode) that first time. From the second run onwards, notables are duplicated into the episode, and all new notables are appended to the ticket that was created with the first alert in the episode on the first run of the correlation search. Please let us know if this is known behavior; if yes, what is the logic behind it? Or do any specific settings/fields need to be modified when raising the tickets?
[2020-07-07 12:40:01+0200] workspace_sandbox RUNNING pid 17159, uptime 21 days, 21:43:58

I have this log line, but I want to extract only workspace_sandbox as a field called Service. I'm using rex "(^(?<Service>\s\s\w+.\w+))\s\s" but having no luck. I also want to extract "RUNNING" as a Status field.
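A hedged alternative rex, assuming the bracketed timestamp always leads the line: anchor past the closing bracket, then capture the next two whitespace-delimited tokens. Against the sample line this would give Service=workspace_sandbox and Status=RUNNING:

```
| rex "^\[[^\]]+\]\s+(?<Service>\S+)\s+(?<Status>\S+)"
```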
Hi, I have 3 clustered search heads, one of which is an ES search head. The ES search head holds a lot of scheduled reports that are causing (in my opinion) a lot of problems with skipped searches. I want to transfer most of the scheduled reports from the ES search head to another search head in the cluster. Looking at Answers, I saw a couple of posts about transferring from a standalone or non-clustered search head to a clustered one, etc. In my case, all of them are clustered, and I'm just looking for the best way to move the reports. Thank you.
Hi, we are trying to install the UF on a Windows machine and are receiving the error below. We are trying to install the UF in a folder on the E drive; I am wondering if there is something wrong with the script and it checks for the UF file path on the C drive. Unfortunately, we do not own the script used to automate the UF deployment. I am looking for some clues/hints.

Looks like this system is a "64bit" OS
Installing 64bit Splunk
Setting up the admin account
The system cannot find the path specified.
Copying deployment server app
Does C:\Program Files\SplunkUniversalForwarder\etc\apps\asla_all_deploymentclient specify a file name or directory name on the target (F = file, D = directory)? d
Invalid path
Hello, we’d like to synchronize Correlation Searches with our incident management tool, TheHive. We could use TA-Thehive to create an Adaptive Response Action in the Correlation Search configuration. However, the difficulty is that in addition to the Correlation Search data we'd like to synchronize the Notable data as well, such as its event_id, next steps, etc. The only way we were able to find is to create an additional alert based on the Notable index for every Correlation Search, then use the TA to create a response action for this alert and send all the Notable Event data to our incident management tool. This solution has two main issues: for every Correlation Search we need to create an additional alert (time consuming), and the alert’s query is based on the Notable index while our Correlation Searches use tstats (performance impact). I’d like to know if anyone has faced the same issue before and was able to find a better solution. Thanks for the help. Alex.
Hello! It's my first time writing here, so forgive me if my question lacks information.

What I want to do: I want to execute a shell script via a scripted input and write the output of this script into a specific log, then send this log to be indexed on another server. All of this should later be deployed as an app to a universal forwarder, which executes the script, writes the log, and sends it to a specific server into a specific index.

What I've done so far: I've created an app with a script in /bin that changes the passwd of the universal forwarder and writes a log into which it echoes certain statements. The script itself looks like this (log messages translated from German):

#!/bin/sh
FILE=/opt/splunkforwarder/etc/passwd
if test -f "$FILE"; then
    echo $(date) " $FILE exists." >> /opt/splunkforwarder/etc/apps/myapp/logging/changepw.log
    #mv /opt/splunkforwarder/etc/apps/myapp/local/inputs.conf /opt/splunkforwarder/etc/apps/myapp/local/inputs.conf.bak
    mv /opt/splunkforwarder/etc/passwd /opt/splunkforwarder/etc/passwd.bak
    echo $(date) " $FILE was renamed and will be recreated. inputs.conf was deactivated." >> /opt/splunkforwarder/etc/apps/myapp/logging/changepw.log
    /opt/splunkforwarder/bin/splunk restart
else
    echo $(date) " $FILE does not exist." >> /opt/splunkforwarder/etc/apps/myapp/logging/changepw.log
fi

So as of now it should do the following: check if there is a passwd; if yes, rename it to passwd.bak, rename my inputs.conf to inputs.conf.bak (so it uses the inputs.conf in default, which has a deactivated scripted input), and then restart Splunk. After each step it writes a message into changepw.log.

The inputs.conf looks like this:

[script://./bin/change.sh]
disabled = 0
interval = -1

[monitor:///opt/splunkforwarder/etc/apps/myapp/logging/*]
disabled = 0
index = main

My outputs.conf looks like this:

[tcpout]
defaultGroup = splunk_indexer

[tcpout-server://<ip>]

[tcpout:splunk_indexer]
disabled = false
server = <ip>:9997

What the problem is: when I start the script, it does as it was told, changing the passwd, renaming it to passwd.bak, writing all echoes into changepw.log, then restarting Splunk. But for whatever reason it doesn't seem to send anything to my server. I've already checked whether my forwarder is active; it is. I can ping the server from the UF. I've created a test.log in the same folder where my changepw.log resides and filled it with some text; after a few moments it appeared on my server, indexed. Splunk is started as user splunk and has all the necessary rights to execute, read and write anything within /splunkforwarder.

Did I leave something out? I feel like I'm standing right in front of a wall. I hope someone can help!

Edit: I've noticed that when I deactivate the script in my inputs.conf, comment out the mv inputs.conf inputs.conf.bak part, and start change.sh manually, it works just fine and my server shows the log. Why can that be? I assume that when I mv the inputs.conf, the script ends even though it already started. Can that be? If so, the final question would be: how does the script need to look in order to do the following: check if there is a passwd; if so, rename it to passwd.bak, write everything into a log and restart Splunk; and after restarting, Splunk should not start the script again.
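One plausible explanation for the setup above (an assumption, not verified): renaming local/inputs.conf removes the [monitor://...] stanza along with the script stanza, so after the restart nothing is watching changepw.log anymore. A sketch of a default/inputs.conf that keeps the monitor active while shipping the script disabled, so that local/inputs.conf only needs to enable the script and can be safely renamed away:

```
# default/inputs.conf (hypothetical layout)
# ship the scripted input disabled; enable it only in local/inputs.conf
[script://./bin/change.sh]
disabled = 1
interval = -1

# keep the monitor here so it survives when local/inputs.conf is renamed
[monitor:///opt/splunkforwarder/etc/apps/myapp/logging/*]
disabled = 0
index = main
```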
Hi Splunkers, I have enabled batch mode for a date field with the below query in DB Connect:

SELECT * FROM SCHEMANAME.TABLENAME WHERE Termination_date >= from_unixtime(unix_timestamp()-1*60*60*24, 'yyyy-MM-dd') ORDER BY Termination_date DESC;

The table doesn't have any primary key, hence I am making use of batch mode in DB Connect to retrieve all the data from the table, comparing against one of the date fields in the table, "Termination_date". The table generates 5000 rows in a day, so I have scheduled the input every 300 seconds to retrieve 300 rows.

My question: will it retrieve the last 300 rows of the day, or will it keep ingesting the first 300 rows from the table into Splunk (I have given DESC in the SQL query)? Is there any other solution to get the data using the same date field, given there is no primary key?

Thanks in advance.
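If the date column only ever moves forward, a rising-column input (instead of batch mode) is the usual way to avoid re-ingesting rows even without a primary key: DB Connect substitutes its saved checkpoint for the ?, and the ascending sort is required so the checkpoint only advances. Table and column names are taken from the post; treat this as a sketch:

```sql
SELECT * FROM SCHEMANAME.TABLENAME
WHERE Termination_date > ?
ORDER BY Termination_date ASC
```

One caveat: if many rows share the exact same Termination_date value, rows arriving later with that same timestamp can be skipped once the checkpoint has passed it, so a finer-grained datetime column is preferable when available.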
Hi all, we are unable to get the Salesforce event log into Splunk; we are getting a 400 error code. The error message details are below. Also note that we are able to query the data and fetch the results with the same user directly through Salesforce; it is just that when we connect through Splunk it doesn't work. Here is the error message:

2020-07-07 07:47:13,452 +0000 log_level=ERROR, pid=32523, tid=MainThread, file=engine_v2.py, func_name=start, code_line_no=57 | [stanza_name=event] CloudConnectEngine encountered exception
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/bin/splunk_ta_salesforce/cloudconnectlib/core/engine_v2.py", line 52, in start
    for temp in result:
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/bin/splunk_ta_salesforce/cloudconnectlib/core/job.py", line 88, in run
    contexts = list(self._running_task.perform(self._context) or ())
  File "/opt/splunk/etc/apps/Splunk_TA_salesforce/bin/splunk_ta_salesforce/cloudconnectlib/core/task.py", line 288, in perform
    raise CCESplitError
cloudconnectlib.core.exceptions.CCESplitError

2020-07-07 07:47:13,451 +0000 log_level=ERROR, pid=32523, tid=MainThread, file=task.py, func_name=_send_request, code_line_no=504 | [stanza_name=event] The response status=400 for request which url=https://xxxyyyy.salesforce.com/services/data/v48.0/query?q=SELECT%20Id%2CEventType%2CLogDate%2CCreatedDate%20FROM%20EventLogFile%20WHERE%20CreatedDate%3E%3D2020-06-07T00%3A00%3A00.000z%20AND%20Interval%3D%27Hourly%27%20ORDER%20BY%20CreatedDate%20LIMIT%201000 and method=GET.