
Hi, I am trying to troubleshoot the Splunk Add-on for Microsoft Cloud Services. I checked the following checkpoint directory but could not get any data:

Azure Storage Blob: $SPLUNK_HOME/var/lib/splunk/modinputs/mscs_storage_blob

The troubleshooting documentation for Azure blob storage says: if you can't get data, check that you are using the correct Account Name and Account Secret; use the query in the preceding table to check for errors; if you have no errors and still cannot collect data, remove the checkpoint file and try again. I believe I have provided the correct Account Name and Account Secret.

Please advise why I am unable to get the data?

Regards, Rahul
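A first place to look is the add-on's own internal logging. A hedged sketch (the source pattern is an assumption based on this add-on's typical log file naming; verify the actual file names under $SPLUNK_HOME/var/log/splunk before relying on it):

```
index=_internal source=*mscs* (ERROR OR WARN)
| table _time source log_level _raw
```

If this turns up nothing at all, the input may not be running; if it shows authentication errors, the account credentials or storage permissions are the next thing to re-check.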
Hello, I am getting my VPN logs in syslog format on my single Splunk deployment instance, and I am having trouble figuring out the proper way to extract the fields.

Aug 11 15:57:00 uf-log-ads-01.s.uw.edu 122.211.777.777/111.111.111.112 {"EVENT":"ACCESS_SESSION_OPEN","session.server.network.name":"dept-falconsonnet-ns.uf.edu","session.server.landinguri":"/dservers","session.logon.last.username":"carl","session.saml.last.attr.friendlyName.eduPersonAffiliation":"| member | staff | employee | alum | faculty |","session.client.platform":"Win10","session.client.cpu":"WOW64","session.user.clientip":"111.11.111.111","session.user.ipgeolocation.continent":"NA","session.user.ipgeolocation.country_code":"US","session.user.ipgeolocation.state":"Georgia","session.user.starttime":"1597186611","sessionid":"b5b42313cbb528a386beafff72cd5cef"}

Now I am trying to figure out the best way to extract the field names I care about. Delimiter extraction on commas doesn't work because of values like "session.saml.last.attr.friendlyName.eduPersonAffiliation":"| member | staff | employee |", so I ruled that out.

I then tried regex, and everything was going well until I noticed that the continent field, after extracting it, saving it, and then searching, was picked up by only some events; others were missing it even though the value was the same, in this case "NA". The same thing happened with the sessionid field. I compared both logs and they were coming from the same source, sourcetype, and index; the main differences were the sessions, session start times, IPs, and usernames.

The syslog comes into the server and writes to a .log file on my Splunk server, which is how I got the data indexed (by monitoring a directory). Now I am stuck and not sure how I should approach this problem.
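Since everything after the syslog header is a single JSON object, one approach (a sketch; the index and sourcetype are placeholders, and the field names come from the sample event) is to carve out the JSON with rex and hand it to spath instead of using delimiter or per-field regex extractions:

```
index=vpn sourcetype=syslog
| rex field=_raw "(?<json_payload>\{.+\})"
| spath input=json_payload
| table _time EVENT "session.logon.last.username" "session.user.ipgeolocation.continent" sessionid
```

Because spath parses the JSON structurally, embedded commas and pipes inside values (like the eduPersonAffiliation list) no longer break the extraction, and optional fields are simply null rather than misaligned.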
We are setting up the Splunk RWI solution and have been requested by the Splunk engineer to open ports 4443 and 4444 to allow communication to the on-prem HF from the respective public endpoint URLs for Zoom and Teams data. Given that both are cloud-based data sources, I was wondering why this can't be done directly on the IDM itself?
Hi, I've created an app to monitor some directories for two hosts. The stanzas are completely identical except for the directory path. Is there a way to specify in inputs.conf which host uses which stanza?
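inputs.conf itself has no per-host conditional, so the usual pattern is to split the stanzas into two small apps and target them with a deployment server via serverclass.conf. A sketch with hypothetical host, app, and server-class names:

```
# serverclass.conf on the deployment server (names are placeholders)
[serverClass:monitor_hostA]
whitelist.0 = hostA*

[serverClass:monitor_hostA:app:inputs_hostA]
restartSplunkd = true

[serverClass:monitor_hostB]
whitelist.0 = hostB*

[serverClass:monitor_hostB:app:inputs_hostB]
restartSplunkd = true
```

Each app then carries only the [monitor://...] stanza for its own directory path, so each host receives exactly the input it needs.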
According to the documentation here, under the heading "Clear a setting": https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/Howtoeditaconfigurationfile

"A configuration setting that appears in default can be overridden by an empty setting in local."

This often works for things like FIELDALIAS, EVAL, EXTRACT, REPORT, and others, but I notice it does NOT work for the INDEXED_EXTRACTIONS setting. It looks like the routine that validates this setting chokes if one of the known-good values is not present.

So, if a vendor set INDEXED_EXTRACTIONS=json in their add-on, I might try to set INDEXED_EXTRACTIONS= in local/props.conf for that same sourcetype, hoping to instead do my own extraction on a select few JSON nodes. If I did try that, then despite what Splunk has documented, I would find the file is no longer read in at all. Instead I'd find the following in splunkd.log:

ERROR IndexedExtractionsConfig - Invalid value='' for parameter='INDEXED_EXTRACTIONS'.

followed by:

ERROR TailReader - Ignoring path="/myvendorApp/logs/filename.log" due to: Invalid indexed extractions configuration - see prior error messages

If anyone knows how to make this work for INDEXED_EXTRACTIONS, please let me know.
I have an index where each event has unique EventID and Status fields. Each event progresses through multiple interim statuses until it reaches one of two terminal statuses: SUCCESS or FAILURE. Each event goes through a subset of all possible interim statuses.

I'm trying to build a timechart that shows two counts: all failed events, and failed events with a certain interim status. One of the problems is that a preceding interim event could fall outside of the span interval. I was thinking something along these lines (not necessarily syntactically correct):

index=... sourcetype=... Status IN ("FAILURE", "INTERIM")
| timechart span=5m count by EventID
| untable _time EventID eventCount
| stats count as "All" count(eval(eventCount==2)) as "With Interim" by _time
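One way to sidestep the span-boundary problem (a sketch; it assumes EventID uniquely identifies an event, "INTERIM" stands in for the interim status of interest, and the latest event's timestamp is the one to chart on) is to collapse each event's full history with stats before charting:

```
index=... sourcetype=...
| stats values(Status) as all_statuses latest(Status) as final_status latest(_time) as _time by EventID
| where final_status="FAILURE"
| eval with_interim=if(isnotnull(mvfind(all_statuses, "INTERIM")), 1, 0)
| timechart span=5m count as "All Failed" sum(with_interim) as "With Interim"
```

Because stats sees the whole search window per EventID, an interim status that occurred outside a given 5-minute span is still counted against the failure it belongs to.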
I need to extract source and target path fields from a logged command line for an application called Aspera SCP, part of the IBM Aspera file transfer service. The command lines are logged via events such as these:

C:\Program Files\Aspera\Enterprise Server\bin\ascp.exe -T -Q -d -l 50000 -m 25000 -k 2 -i C:\Users\asperaadmin\.ssh\asperaweb_id_dsa.openssh -O 33001 -P 33001 --ignore-host-key --mode=send --user=xferuser --host=ats-aws-us-whatever.com Z:\Content\metadata\somefile.xml /we-shall-anonimyze-this-one-too-b2c09898392f

C:\Program Files\Aspera\Enterprise Server\bin\ascp.exe -T -Q -d -l 300000 -m 10000 -k 2 -O 33001 -P 22 --ignore-host-key Z:\Source Files\Content\image.jpg username@target.host.net:/target/path/directory

Note that flags may be followed by filenames that contain spaces, and filenames themselves may contain spaces. The commands follow the Aspera Command Reference:

ascp options [[user@]srcHost:]source_file1[,source_file2,...] [[user@]destHost:]target_path

Questions: What is the best mechanism to extract fields such as username, hostnames, and filenames for both the source and target parts of the above events/commands, and optionally the command flags, at search time? A single complex rex statement with optional groups, given that some of the fields are optional? Multiple simpler rex statements? I would appreciate help writing SPL to extract these fields that works for the above two events.

P.S. I tried writing several rex statements to extract the ascp filename, ignore most flags, then one source filename, and finally the target user, host, and path for the above two statements - and got stuck. My multiple rex statements are stepping on each other and thus do not seem to be the best mechanism. The code below isn't working properly.
| rex field=event_message "^(?P<program_path_win>\w\:\\\.*?\\\(?P<program_module_win>[^\\\]+\.\S+))\s+(?P<event_msg_tail>.+)$"
| rex field=event_msg_tail ".+\s+(?P<file_path_win>\w\:\\\.*?\\\(?P<file_name_win>[^\\\]+\.\S+))\s+(?P<Destination>((?P<peer_userID>.+)\@)(?P<peer_host>\S+))\:\s*(?P<peer_dir>.*)?$"
| rex field=event_msg_tail "--host=(?P<peer_host>\S+)\s+"
| rex field=event_msg_tail "--user=(?P<peer_user>\S+)\s+"
| rex field=event_msg_tail "-i\s+(?P<private_key_file>\w\:\\\.+?)\s+(?:-\w+\s+|--\w+=|$)"
| rex field=event_msg_tail ".+\s+(?P<file_path_win>\w\:\\\.*?\\\(?P<file_name_win>[^\\\]+\.\S+))\s+(?P<peer_dir>.*)?$"
| eval peer_dir = coalesce(peer_dir, "")
| eval peer_host = coalesce(peer_host, "")
| eval peer_user = coalesce(peer_user, peer_userID, "")
| eval agent = coalesce(program_module_win, "")
| table _time host log_level component agent peer_host peer_user peer_userID peer_dir

Thanks!

P.S. I inadvertently posted this to the wrong section - to "Dashboards & Visualizations" rather than "Search" - but I don't see an option to move the post. Would someone please kindly move it, or tell me how?
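Rather than parsing every flag, it may be simpler to anchor separate, order-independent rex statements on the distinctive shape of each field. A hedged sketch, based only on the two sample events above (the --user/--host flags in one form, and a trailing user@host:path token in the other):

```
| rex field=event_message "--user=(?<peer_user>\S+)"
| rex field=event_message "--host=(?<peer_host>\S+)"
| rex field=event_message "\s(?<peer_userID>[^@\s]+)@(?<target_host>[^:\s]+):(?<peer_dir>\S+)$"
| eval peer_host=coalesce(peer_host, target_host)
```

Because the target path in the samples contains no spaces while source filenames may, anchoring the user@host:path capture to the end of the line ($) keeps it from colliding with the other statements; an event a given rex doesn't match simply leaves those fields null rather than stepping on earlier extractions.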
Hi, I am not sure how to phrase this question, but all I am trying to do is hide the "All" option from the Dashboards page in an app. Right now, when someone clicks the "All" option, it shows all the existing dashboards from the other apps too. I want to restrict this app to show only the "Yours" and "This App's" options.

Check the picture below for a better understanding.
Hi All, I am trying to access Splunk from inside Azure Databricks instances. I have a requirement to run queries covering a period of 6 hours, so I break it down into 30-minute windows and submit the jobs. For each window, I send the query using the Python splunklib, execute it, persist the result, and finally delete my job; then I try the next window, and so on.

However, it works for the first 2 or 3 windows without any issue but fails on the next one with authentication errors. This was running without any issue until two weeks ago.

Code used to authenticate:

client.connect(host=splunk_auth_params["host"], username=splunk_auth_params["uname"], password=splunk_auth_params["password"])

Error:

AuthenticationError: Request failed: Session is not logged in.
HTTPError: HTTP 401 Unauthorized -- call not properly authenticated
During handling of the above exception, another exception occurred:

I am using the Python splunklib. Any help would be greatly appreciated.
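Since the failures start only after several successful windows, the session token issued by the first client.connect may be expiring (or being invalidated server-side) between jobs. A defensive pattern, sketched here with a hypothetical helper around the connect call from the post, is to re-authenticate and retry once when a 401 / "not logged in" error surfaces:

```python
import time

def run_with_relogin(connect, job, max_retries=1):
    """Run job(service); on an auth failure, reconnect and retry.

    connect: zero-argument callable returning a fresh, logged-in service
             handle (e.g. a wrapper around splunklib.client.connect).
    job:     callable taking that handle and doing the actual work.
    """
    service = connect()
    for attempt in range(max_retries + 1):
        try:
            return job(service)
        except Exception as err:
            auth_error = "401" in str(err) or "not logged in" in str(err).lower()
            if attempt >= max_retries or not auth_error:
                raise                # exhausted retries, or not an auth problem
            time.sleep(1)            # brief pause before re-authenticating
            service = connect()      # fresh session token for the retry
```

Here connect would wrap client.connect(host=..., username=..., password=...) so each retry obtains a brand-new session rather than reusing the stale one. It is also worth checking whether the Splunk server's session timeout or authentication settings changed around two weeks ago, since the code itself did not.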
We would like to disallow our users from running real-time searches. Where do we disable this feature for those users?
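Real-time search is gated by the rtsearch capability, so removing it from a role blocks the feature; this can be done in Settings > Access controls > Roles in Splunk Web, or in authorize.conf. A sketch (the role name is hypothetical, and explicitly disabling an inherited capability requires a reasonably recent Splunk version):

```
# authorize.conf
[role_no_rt_user]
importRoles = user
rtsearch = disabled
```

Users assigned this role instead of user will no longer be able to launch real-time searches.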
Hello, in my search query I've defined 3 email_subjects and 3 email_addresses with eval, and I want to send an alert to them based on a defined threshold. For example, if the threshold value is 1, then use email_subject1 and email_address1, and so on.

My output is in table format, so to use $result.fieldname$ values I would have to add the email_subject and email_address fields to the search result table (definitely not desired) - that is the issue I'm stuck at, and I faced the same issue with "sendemail" as well. Is there an alternate way to send an email alert via Splunk itself (no script)? @fk319 @woodcock @MuS @bmunson_splunk
Dear all, I need to identify some duplicate events based on the value that comes right after "Call-ID:"; however, I cannot get Splunk to identify this field:

index=teste "*CALL-ID*"

Aug 11 14:50:42 10.178.214.7 1 2020-08-11T14:50:41.979000-03:00 localhost GroupSeries - - [NXLOG@14506 EventReceivedTime="2020-08-11 14:50:42" SourceModuleName="plcmlog" SourceModuleType="im_file"] CEng: SIPMSG: Call-ID: 2932867290-4209

Aug 11 14:50:34 10.53.96.71 1 2020-08-11T14:50:25.326000-03:00 G7500-4D2120F2 GroupSeries - - [NXLOG@14506 EventReceivedTime="2020-08-11 14:50:34" SourceModuleName="plcmlog" SourceModuleType="im_file"] CEng: SIPMSG: Call-ID: 1112255280-4006

Aug 11 14:50:34 10.53.96.71 1 2020-08-11T14:50:25.080000-03:00 G7500-4D2120F2 GroupSeries - - [NXLOG@14506 EventReceivedTime="2020-08-11 14:50:34" SourceModuleName="plcmlog" SourceModuleType="im_file"] CEng: SIPMSG: Call-ID: 1112255280-4006
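One way to surface the duplicates (a sketch; it assumes the Call-ID value is the whitespace-delimited token shown in the samples):

```
index=teste "Call-ID:"
| rex "Call-ID:\s+(?<call_id>\S+)"
| stats count by call_id
| where count > 1
```

The rex creates the call_id field at search time, and the stats/where pair keeps only values that occur more than once.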
All of our Splunk users, including members of our Leadership Team, are currently in the US/Eastern time zone. All of the incoming logs are indexed in UTC, and the indexes are used to build dashboards for our Leadership Team. There is a desire for the dashboards to switch over to showing data for the next day at 12AM Eastern Time (ET) rather than 12AM UTC. UTC is currently 4 hours ahead of ET (Eastern Daylight Time).

The logic the team is currently using to "force" two of our dashboards to display data according to ET is as follows (the entire SPL isn't included since it may not be relevant to the problem we're trying to solve):

--- Dashboard 1
index=data1 sourcetype=datatype1
| eval epoch_Timestamp=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3QZ")-14400

--- Dashboard 2
index=data2 sourcetype=datatype2
| eval epoch_file_create_date=strptime(file_create_date, "%Y-%m-%d %H:%M:%S.%3Q")-14400, epoch_file_update_date=strptime(file_update_date, "%Y-%m-%d %H:%M:%S.%3Q")-14400

Notice the 4-hour offset in both SPL queries; this is the team's way of converting the incoming log timestamps from UTC to ET. However, this approach alone isn't causing the dashboards to switch over to the next day at 12AM ET. Instead, the switch-over is still occurring at 12AM UTC. So any data generated starting at 8PM ET is being categorized into the next day, when the expectation is that it gets categorized into the current day. Also important to note: props.conf on the indexer is currently not configured with a US/Eastern timezone setting.

What is the best approach to get the dashboards to switch over at 12AM ET rather than 12AM UTC? Should props.conf be updated AND the -4 hr offset be removed from the SPL queries? Or is there a better approach?
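Since the raw timestamps carry UTC markers and appear to be parsed correctly, the day boundary is arguably a display/time-picker issue rather than an indexing one. Instead of hard-coding -14400 (which breaks when daylight saving time ends), one option is to leave _time alone and set each user's timezone preference, or the search head default, to US/Eastern, so that relative ranges like "Today" roll over at midnight Eastern. A sketch of the default-preference approach (path and stanza per user-prefs.conf conventions; verify against the docs for your version):

```
# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf on the search head
[general_default]
tz = US/Eastern
```

With that in place, the -14400 evals can be removed; TZ in props.conf only needs changing if events were actually being timestamped in the wrong zone at index time.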
Currently I have Splunk ingesting AWS logs describing NACLs. Each event has an array called network_acl_entries; this is a list of objects, and each object has a cidr_block field and a rule_action field. I'm trying to display in a table each rule that is not a deny on subnet 0.0.0.0/0. I can't find a way to remove an entire object from the list when network_acl_entries.cidr_block=0.0.0.0/0 and network_acl_entries.rule_action="allow" - there doesn't seem to be a way to correlate the multivalue data. When I put them in a table, I can individually remove all the denies, but it still lists the CIDRs associated with those denies.

The search for the table looks like:

index=__aws aws_account_id="*" region="*" source="*:vpc_network_acls" sourcetype="aws:description"
| dedup associations{}.id
| rename network_acl_entries{}.cidr_block as cidr, network_acl_entries{}.egress as egress, network_acl_entries{}.rule_action as rule, associations{}.subnet_id as subnet, network_acl_entries{}.port_range.to_port as "to port", network_acl_entries{}.port_range.from_port as "from port", network_acl_entries{}.rule_number as rule_Number
| table index account_id vpc_id tags.Name id subnet rule_Number cidr, egress, rule, "to port", "from port"

Each row in the table is a separate VPC listing all NACLs and the CIDRs that are open/closed.
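Multivalue fields lose their positional pairing once renamed and tabled separately, so the usual pattern is to zip the parallel arrays into one value per rule, expand, and then filter. A sketch (it assumes the cidr_block and rule_action arrays are the same length and aligned, which they should be if they come from the same JSON objects):

```
index=__aws source="*:vpc_network_acls" sourcetype="aws:description"
| eval entry=mvzip('network_acl_entries{}.cidr_block', 'network_acl_entries{}.rule_action')
| mvexpand entry
| eval cidr=mvindex(split(entry, ","), 0), rule=mvindex(split(entry, ","), 1)
| where NOT (cidr="0.0.0.0/0" AND rule="deny")
| table vpc_id cidr rule
```

mvzip can be nested to carry more per-rule fields (egress, rule_number, ports) through the same expand-and-split step; adjust the where clause to the exact rule/CIDR combination you want to drop.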
I have IBM WebSphere-related configuration XML files, which change whenever any configuration change happens. I want to compare my previous XML file content with the current XML file based on timestamp, and the output should be the difference between the two. Is this possible to achieve in Splunk?

@skakehi_splunk @richgalloway @rnowitzki @woodcock @somesoni2 @niketn
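Splunk's set diff command returns events present in one subsearch but not the other, which can approximate a file diff if each XML file version is indexed as events. A sketch (the index, source, and time windows are hypothetical, and subsearch result limits apply to large files):

```
| set diff
    [ search index=websphere source="*/server.xml" earliest=-24h | fields _raw ]
    [ search index=websphere source="*/server.xml" earliest=-48h latest=-24h | fields _raw ]
```

For a true line-by-line comparison of two whole files ingested as single events, running an external diff before indexing (for example via a scripted input) is often more practical.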
I have the below kind of data:

App Name   Status
App1       0
App2       0
App3       0
App4       0
App5       0
App6       1
App7       0
App8       0
App9       0
App10      0

0 = Success, 1 = Failure. Treat 0 as 100% and 1 as 0%.

The status value gets updated every 5 minutes. My requirement is to calculate, per app, the average from the start of the day until the present time; for the table above the success percentage would be 90%. Similarly, the query should work if I want to calculate the average after the n'th update. I also want to keep the current status in one variable, so my final output should contain two values: the current status and the average.
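A sketch covering both requirements in one search (it assumes each 5-minute update is an indexed event carrying App_Name and Status fields, and that "from the start of the day" maps to earliest=@d):

```
index=... earliest=@d
| stats avg(eval(if(Status==0, 100, 0))) as avg_success_pct latest(Status) as current_status by App_Name
```

For "after the n'th update", replace earliest=@d with a window covering those updates, or sort by _time and use streamstats count by App_Name to number the updates before averaging.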
I'm getting this informational message when running "stats count" commands:

This search uses deprecated 'stats' command syntax. This syntax implicitly translates '<function>' or '<function>()' to '<function>(*)', except for cases where the function is 'count'. Use '<function>(*)' instead.

I don't understand it. What am I doing wrong, and what should I be doing instead? A sample of the stats command generating the message above:

| stats sparkline count(Destination) AS sessions by Destination_URL, Destination_userID

Thanks!
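The trigger in the sample is the bare sparkline token: with no argument, Splunk has to implicitly expand it, which is the deprecated form the message complains about. Writing the function argument explicitly is equivalent and silences the warning:

```
| stats sparkline(count) count(Destination) AS sessions by Destination_URL, Destination_userID
```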
I lost my search head and cluster master, and I tried to restore from the files I had already backed up. The app now says it cannot authenticate to Salesforce. The client secret is still encrypted in the file (*****), but it will not connect automatically. Do I have to re-add everything manually for all my accounts? I am using OAuth.

Is there something else I should be doing to back up and restore my configuration files for the app?
We have 4 reports (A, B, C, D) scheduled to run daily and then email their results to us. For a while, all 4 ran and worked perfectly. Then we upgraded to v8.0, and now reports A and B still run and email properly, but reports C and D run properly yet do not email us. When we run them manually we can see the line that says "This scheduled report runs daily, at 11:00. Its time range is last 24 hours. The following results were generated X hours ago.", where X is the number of hours since the report ran as scheduled at 11:00.

We've tried tweaking the schedule, deleting and re-scheduling the entire report, and changing from Report to Alert; nothing gets the email working. The reports run correctly no matter how we set them up, but the emails never get sent from C and D, while A and B work great and we get their emails daily. Any thoughts? Thanks!
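One way to see whether the scheduler actually attempted the emails for C and D, and what the mail subsystem reported, is to check the internal logs (a sketch; the exact source names can vary by version, so broaden the filter if nothing matches):

```
index=_internal (sourcetype=splunkd OR source=*python.log*) sendemail
| table _time host source _raw
```

Comparing the entries around 11:00 for a working report (A or B) against a failing one (C or D) usually narrows it down to either "no send attempted" (scheduler or alert-action configuration) or "send failed" (SMTP).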
I need to send OS and authorisation logs from an AS400 server to Splunk; unfortunately, no universal forwarder exists for the AS400. Searching through the community I located the thread 'AS400 iSeries app/collection?', whose solution linked to a custom, unsupported app (http://splunk-base.splunk.com/apps/24097/splunk-for-as400-iseries).

Having downloaded the app, which is old, I found very little detail or instruction on how to use it. We can get the respective log files from the AS400, but without documentation we do not know how the app works or what configuration needs to be made. The app may be a solution for us if we could figure out how it is meant to be used. Can anyone share their experience with this particular app and how it should be configured for the AS400 to push logs to Splunk? Thanks