All Topics

I have a question. If we are using a heavy forwarder to parse data and forward it to indexers, does it need an Enterprise license or just the forwarder license? Can I use something like ./splunk edit licenser-groups Forwarder -is_active 1? I don't want to use my HF as a deployment server, license manager, or Monitoring Console; its only role will be to parse data received from UFs and forward it to the peers or indexers. Now let's say I install add-ons like Splunk DB Connect. Would that require an Enterprise license, or would the forwarder license suffice? I am still just forwarding the data.
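A minimal CLI sketch of that switch, run from $SPLUNK_HOME/bin (the second command is only there to verify which license group is now active):

  ./splunk edit licenser-groups Forwarder -is_active 1
  ./splunk list licenser-groups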
While running the search below I am not getting any events:

  index=main_vulnerability_database sourcetype=vulnerability_overview _bkt="main_vulnerability_database~0~FB1A6C9D-87F2-4A38-B420-94F2171CE493" _cd=0:1015

But when I move the filter into a search command, I do get events:

  index=main_vulnerability_database sourcetype=vulnerability_overview | search _bkt="main_vulnerability_database~0~FB1A6C9D-87F2-4A38-B420-94F2171CE493" _cd=0:1015

Ideally both should give the same results. I'm looking for the reason why this is happening.
Hello team, So far we have been ignoring the error "ERROR HTTPClient - Should have gotten at least 3 tokens in status line, while getting response code. Only got 0." present in almost every search.log from our use cases; however, we are now looking at the comment from a Splunk employee here
After upgrading Splunk to 9.1.0.2, we have found many errors; below is one of them. This error appears on all the indexers in the environment.

  7.7% 07-06-2023 07:54:39.849 +0000 ERROR SearchProcessRunner [45132 PreforkedSearchesManager-0] - preforked search=0/21892 on process=0/11297 caught exception. completed_searches=2, process_started_ago=32.000, search_started_ago=0.034, search_ended_ago=0.000, total_usage_time=3.848
Hello everyone, Trust you are all having a lovely day. Please, I want to find out if there are any activities I can perform periodically to maintain my AppDynamics setup and prevent any possible downtime. Thanks
Encountering random skipped searches and slow UI access.
How can I create a stacked bar graph showing the different log levels (Error, Info, Debug) generated by each Process?

  index="intau_workfusion" sourcetype=workfusion.out.log host=*
  | rex "^(?<Date>\d+-\d+-\d+\s+\d+:\d+:\d+)\s+\[[^\]]*\]\s*\[(?<Process>[^\]]*)\]\s*\[(?<Step>[^\]]*)\]\s*\[(?<User>[^\]]*)\]\s*[^\[]+\s\[(?<Log_level>[^\]]+)"
  | search Log_level="*"
  | where Process != ""
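A minimal sketch of one way to finish this, assuming the rex extraction above works as posted: pivot the counts with chart, then pick a bar or column chart and set Stack Mode to "stacked" in the Format menu:

  ... | chart count over Process by Log_level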
I have Splunk Enterprise (Windows) on a single instance, and the indexes are on a drive that is now full. I have added a new disk, the F: drive, and I want to move my indexes to it. Do I need to make any configuration change related to the new drive?
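A minimal sketch of the usual approach, assuming an index named main and a hypothetical target path F:\splunkdata: stop Splunk, copy the index directories to the new drive, point the index paths there in indexes.conf, and restart:

  [main]
  homePath   = F:\splunkdata\main\db
  coldPath   = F:\splunkdata\main\colddb
  thawedPath = F:\splunkdata\main\thaweddb

To move all indexes at once, changing SPLUNK_DB in splunk-launch.conf to the new location is the commonly used alternative.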
Hello all, I have created a dashboard with Dashboard Studio that has a list of visualizations for groups of servers (CPU usage, memory usage, disk I/O, etc.). If I have to navigate to a certain set of visualizations, I have to scroll through a long list of other visualizations. Is it possible to jump to a certain section within the same dashboard? Thank you,
Currently, we group related alerts in Alertmanager and then send them to Splunk On-Call to make incident management more friendly. However, when a single alert within an incident is resolved, the entire incident is marked as resolved. Is it possible not to mark the incident as resolved until all grouped alerts are resolved?
Hi, I want to prevent alerts from being skipped, and I'm fine with the alerts not running at a specific time; I would rather be notified with a delay than not at all.

One option is to set a schedule window. First of all, I'm wondering why alert editing does not offer this option the way reports do; I have to go into the advanced edit mode to configure the schedule window. When it is configured, we allow the scheduler to delay the dispatch time, but at some point the search will be skipped anyway.

Another option is to use the scheduling mode "continuous". As far as I understand it, an alert in continuous mode is never skipped, which sounds reasonable for security monitoring without gaps. I assume the scheduler will try to run the search as soon as possible. Is continuous mode a best practice to avoid gaps, or are there valid reasons not to use it? If the mode is used, it might be a good idea to observe the scheduler lag more closely to determine how late alerts run and whether the scheduler is building up a huge backlog of delayed searches.

I also don't know how the scheduling_mode interacts with the schedule window. Does the schedule window have any effect when the mode is "continuous"?
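For reference, a minimal savedsearches.conf sketch of the two settings in question (the stanza name is hypothetical):

  [My security alert]
  realtime_schedule = 0    # 0 = continuous scheduling; 1 = real-time scheduling (the default)
  schedule_window = 30     # minutes the scheduler may delay dispatch; "auto" is also accepted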
I have an alert set up to detect multiple invalid user credential sign-in attempts, which runs once every 24 hours at 9am. However, once 9am rolls around, I get an excessive number of alert emails, one for each of the invalid sign-in attempts. I'd love it if there were just one email with all the alerts listed in it.
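A minimal sketch of the usual fix: set the alert's trigger condition to "Once" per search run instead of "For each result", which in savedsearches.conf corresponds to digest mode:

  alert.digest_mode = true    # one notification summarizing all results of a run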
Hello community. I'm trying to extract information from a string-type field and build a graph on a dashboard. In the graph, I want to group identical messages. I run into difficulties when grouping a type of message that contains an id, which is different for each message, so each message ends up as a separate value. Example message: {"status":"SUCCESS","id":"123456789"}. I use this query:

  "source" originalField AND ("SUCCESS" OR "FAILURE") | stats count by originalField

This query groups the fields that contain a FAILURE status, but does not group the SUCCESS ones because they have different IDs. I tried different substrings but it doesn't work. Can someone give me a solution?
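A minimal sketch of one approach, assuming the message text is in the field originalField as posted: mask the variable id with replace() before aggregating, so all SUCCESS messages collapse into one group:

  "source" originalField AND ("SUCCESS" OR "FAILURE")
  | eval msg=replace(originalField, "\"id\":\"[^\"]*\"", "\"id\":\"*\"")
  | stats count by msg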
<6>2023-08-17T04:51:52Z 49786672a6c4 PICUS[1]: {"common":{"unique_id":"6963f063-a68d-482c-a22a-9e96ada33126","time":"2023-08-17T04:51:51.668553048Z","type":"","action":"","user_id":0,"user_email":"","user_first_name":"","user_last_name":"","account_id":7161,"ip":"","done_with_api":false,"platform_licences":null},"data":{"ActionID":26412,"ActionName":"Zebrocy Malware Downloader used by APT28 Threat Group .EXE File Download Variant-3","AgentName":"VICTIM-99","AssessmentName":"LAB02","CVE":"_","DestinationPort":"443","File":"682822.exe","Hash":"eb81c1be62f23ac7700c70d866e84f5bc354f88e6f7d84fd65374f84e252e76b","Result":{"alert_result":"","has_detection_result":false,"logging_result":"","prevention_result":"blocked"},"RunID":109802,"SimulationID":36236,"SourcePort":"51967","Time":5}}

I have a raw log like the one above. Can you help me parse it into separate fields?
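A minimal search-time sketch, assuming the JSON payload always follows the PICUS[...]: prefix: strip the syslog header with rex, then expand the JSON into fields with spath:

  ... | rex field=_raw "PICUS\[\d+\]:\s+(?<json>\{.+\})$"
  | spath input=json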
Hey Fellow Splunkers,

I'm having a bit of trouble, perhaps with understanding how this works and whether I'm doing it correctly. Currently on version 9.0.2.

Scenario
Product Logs -> Syslog (has a HF on it) -> IDX

Syslog is writing to one single file that I monitor, and it contains multiple time formats and different line breaking. I basically want to bring all the syslog from this product into one sourcetype, kind of like a staging area, then split events out based on regex. This is what I've got so far. Most of this is dummy data, so don't worry about scrutinizing it for typos etc.

Configuration (this is all on the HF)

Inputs.conf

  [monitor://path/to/product/syslogs]
  index = syslog
  sourcetype = product_staging

Props.conf

  [product_staging]
  TRANSFORMS = change_sourcetype_one, change_sourcetype_two

  [sourcetype_one]
  LINE_BREAKER = A line breaking example
  TIME_FORMAT = %m-%a-%d %H:%M:%S
  TIME_PREFIX = ^
  MAX_TIMESTAMP_LOOKAHEAD = 30

  [sourcetype_two]
  LINE_BREAKER = A line breaking example
  TIME_FORMAT = %C-%b-%a %M:%k:%S
  TIME_PREFIX = ^
  MAX_TIMESTAMP_LOOKAHEAD = 30

Transforms.conf

  [change_sourcetype_one]
  DEST_KEY = MetaData:Sourcetype
  REGEX = (DataOne)
  FORMAT = sourcetype::sourcetype_one

  [change_sourcetype_two]
  DEST_KEY = MetaData:Sourcetype
  REGEX = (DataTwo)
  FORMAT = sourcetype::sourcetype_two

I can get the data to split easily. My issue is that when it splits off into the different sourcetypes, the index-time features like TIME_FORMAT, TIME_PREFIX, LINE_BREAKER etc. don't take effect on the new sourcetypes that were made from the split.

Is it simply because the original sourcetype [product_staging] has already touched the data with its own settings, and now the other sourcetypes can't apply theirs? I honestly don't understand what I'm doing wrong. Any help would be greatly appreciated.
I had to change my UBA instance's IP because of an infra change. After the IP change was done, part of UBA couldn't be brought up again. I ran a health check and found it's jammed by the Docker socket. Has anyone run into this and knows how to fix it? I saw some solutions like adding the user to the /var/run/docker.sock permission group, but I'm curious because the user "caspida" is already permitted to sudo ALL commands. So is that really the problem? In addition, all the configuration I can see is keyed by hostname, so I'm not sure why an IP change would cause a problem. I'm running single-instance version 5.2.

  Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Hi, I have a question and I hope to receive an answer soon. I am using Splunk Enterprise installed on CentOS 7 servers. OpenSSH is currently at versions 7.4 and 8.1. I want to update OpenSSH on all Splunk servers (8 CentOS 7 servers: a 2-node search head cluster, a 2-node indexer cluster, 2 heavy forwarders, 1 deployment server, and 1 master node) from 7.4/8.1 to the latest OpenSSH version still supported on CentOS 7. The Splunk Enterprise version in use is 8.0.7. I would like to ask what effect the upgrade will have on Splunk's performance and what to prepare on the Splunk side before updating OpenSSH. Thanks for all!
Hi, I am a bit new to the Splunk community and interested in building a Splunk app that can process host-level log data (particularly logs produced by auditd). My end goal is to provide some analysis of the host logs and report it back to the user in a Splunk dashboard. I am unsure how to do the first step of ingesting data from the host machine into the app.
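A minimal sketch of the usual first step, assuming a Linux host and a hypothetical index name: a file monitor stanza in the app's inputs.conf pointing at the auditd log (the sourcetype here is borrowed from the Splunk Add-on for Unix and Linux; adjust as needed):

  [monitor:///var/log/audit/audit.log]
  index = host_logs
  sourcetype = linux_audit
  disabled = 0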
Brand new servers. Not receiving all data from the UF. Confirmed connectivity. Confirmed inputs via:

  /opt/splunkforwarder/bin/splunk btool inputs list | grep bc_ | grep "\["

Only getting 2 sourcetypes when there should be at least 16 for the index. Getting this error message:

  Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).

Getting this when starting splunkd:

  Splunk> Take the sh out of IT.

  Checking prerequisites...
          Management port has been set disabled; cli support for this configuration is currently incomplete.
          Checking conf files for problems...
                  Invalid key in stanza [webhook] in /opt/splunkforwarder/etc/system/default/alert_actions.conf, line 229: enable_allowlist (value: false).
                  Your indexes and inputs configurations are not internally consistent. For more information, run 'splunk btool check --debug'
          Done
          Checking default conf files for edits...
          Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.0.3-dd0128b1f8cd-linux-2.6-x86_64-manifest'
          All installed files intact.
          Done
  All preliminary checks passed.

  Starting splunk server daemon (splunkd)... Done
I am having an issue finding a way to standardize emails in a query so that the output is "First Last" in a new field.

There are mainly two email formats: "first.x.last@domain.com" and "first.last@domain.com".

This first query works for "first.x.last@domain.com":

  | makeresults
  | eval name="first.x.last@domain.com"
  | rex field=name "^(?<Name>[^@]+)"
  | eval tmp=split(Name,".")
  | eval tmp2=split(Name,".")
  | eval FullName=mvindex(tmp,0)
  | eval FName=mvindex(tmp2,2)
  | table FullName FName
  | eval newName=mvappend(FullName,FName)
  | eval FN=mvjoin(newName, " ")
  | table FN

And this one for "first.last@domain.com":

  | makeresults
  | eval name="first.last@domain.com"
  | rex field=name "^(?<Name>[^@]+)"
  | eval tmp=split(Name,".")
  | eval FullName=mvindex(tmp,0,1)
  | eval FN=mvjoin(FullName, " ")
  | table FN

Any recommendations on how to get an output of "First Last" in one field for both email types?
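A minimal sketch of one way to handle both formats in a single query, assuming the address is in a field named name as above: capture the first token and the last token before the @, skipping an optional middle token:

  | makeresults
  | eval name="first.x.last@domain.com"
  | rex field=name "^(?<first>[^.@]+)\.(?:[^.@]+\.)?(?<last>[^.@]+)@"
  | eval FN=first." ".last
  | table FN

The same rex also matches "first.last@domain.com", because the middle-token group is optional.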