All Topics

Hi, how can I send an empty scheduled report (no events in the search)? I need to send a scheduled report (daily) from an alert, but sometimes there are no results. The recipients need to see the CSV report even if it is empty, but the visualization won't appear if there are no results. Do you know how I can do that? I just want the table visualization with empty results/values. fillnull doesn't work for this, or am I using it wrong? Thanks!
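A common workaround (a sketch, assuming the report ends in a stats or table command; the host and status field names are placeholders) is to append a placeholder row only when the search returns nothing, so the scheduled CSV is always produced:

<your base search>
| stats count by host status
| appendpipe
    [ stats count
    | where count=0
    | eval host="N/A", status="no results today"
    | fields - count ]

Because the appendpipe subsearch runs over the (possibly empty) result set, stats count yields a single row with count=0 exactly when the outer search was empty, and the where clause discards that row otherwise.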
I am trying to build a dashboard with a time input. How can I pass the selected time to the query below?

| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source,Notable_Events_Meta.rule_id
| `drop_dm_object_name("Notable_Events_Meta")`
| `get_correlations`
| stats count by rule_name

E.g. if I select 7 days, it should show data for 7 days only.
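One way to do this (a sketch in Simple XML; the token name time_tok is arbitrary) is to bind the time input's earliest/latest sub-tokens to the panel search, since tstats honors the search's time range:

<fieldset submitButton="false">
  <input type="time" token="time_tok">
    <label>Time Range</label>
    <default>
      <earliest>-7d@d</earliest>
      <latest>now</latest>
    </default>
  </input>
</fieldset>
<row>
  <panel>
    <table>
      <search>
        <query>| tstats `summariesonly` earliest(_time) as _time from datamodel=Incident_Management.Notable_Events_Meta by source,Notable_Events_Meta.rule_id | `drop_dm_object_name("Notable_Events_Meta")` | `get_correlations` | stats count by rule_name</query>
        <earliest>$time_tok.earliest$</earliest>
        <latest>$time_tok.latest$</latest>
      </search>
    </table>
  </panel>
</row>

With this binding, selecting "Last 7 days" in the input restricts the tstats scan to those 7 days.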
I'm currently building an app with Splunk SOAR, and the upload functionality for wheel files seems to be broken. I tested with Chrome, Firefox, and Internet Explorer, all with the same result: there is no "Upload" button or any field to drop the wheel file into. My Splunk SOAR version is 5.2.1.78411.
Hi everyone, I have a client asking whether NCR ATMs are certified by Splunk for installing the Universal Forwarder and receiving logs. The client wants to confirm this with a trusted source.
Hello. I have a problem. My log looks like this:

1.example.log
2022/08/24 12:04:00,ExampreA,"xxx"xx"xxx"xxxx"xxx"xxxx"xxxxx"

I'd like to replace each " with a blank when forwarding logs to the indexer, while keeping the first and last ". I tried editing props.conf on the indexer:

#props.conf
[sourcetype value]
SEDCMD-replacespaces = y/"/ /

The result is that every " was replaced with a space, including the first and last. I want to capture the log like this:

1.example.log
2022/08/24 12:04:00,ExampreA,"xxx xx xxx xxxx xxx xxxx xxxxx"

Any advice would be appreciated.
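A sed-style replacement with lookarounds may do this (a sketch, assuming events are single lines, the opening quote is always preceded by a comma, and the closing quote always ends the event; Splunk SEDCMD expressions use PCRE, so lookarounds should be available):

# props.conf on the indexer (or a heavy forwarder, since SEDCMD runs at parse time)
[sourcetype value]
SEDCMD-stripinnerquotes = s/(?<!,)"(?!$)/ /g

This replaces every " that is not immediately after a comma and not at the end of the event, leaving the first and last " intact.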
Hi, Could I collect "https" using Jira issues Collector add-on ?  http was collected very well, but it is not collected after changing to https. Thanks.
Hi, I have the log file below and would like to build a table out of it (the Line1..Line4 labels are just for reference):

Line1: 2022-05-22 02:02:20 PM UTC False [Android] Password Expiration Notice
Line2: 2022-05-22 06:05:49 PM UTC True [Home] [Android] Password Expiration Notice
Line3: 2022-05-29 04:24:52 AM UTC False [Android] High Memory usage Google
Line4: 2022-05-29 06:05:49 PM UTC True [Android] Password Expiration Notice

Desired table:

Issue                         True    False
Password Expiration Notice    2       0
High Memory usage Google      0       1

Calculating False: Line1 - Line2, i.e. I need to subtract the count of events containing "True [Home]" from the count of "False" events.
Calculating True: the number of events containing "True".
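One possible approach (a sketch; the field names status, home, and issue and the bucket labels are invented, and the regex assumes the exact layout of the four sample lines):

<your base search>
| rex "UTC (?<status>True|False)(?<home> \[Home\])? \[\w+\] (?<issue>.+)$"
| eval type=case(status="True" AND isnotnull(home), "TrueHome", status="True", "TrueOnly", true(), "FalseOnly")
| chart count over issue by type
| eval True=coalesce(TrueOnly,0)+coalesce(TrueHome,0), False=coalesce(FalseOnly,0)-coalesce(TrueHome,0)
| table issue True False

Counting the "True [Home]" events in their own TrueHome bucket lets them be both added to the True column and subtracted from the False column, matching the calculation described above.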
Hello, how would I extract field/value pairs from these sample events (two sample events given below)? I could write something like ...ID : (?P<ID>\w+)... for each field, but are there any good ways to get all of these key/value pairs at once? Thank you so much; I would appreciate your support.

23:51:43.670 |LogMessage ID : sxntest ClientAddress : 10.207.68.172 Level : 6 EventType : UserLogging Resource: RESTIP EventStatus : Success CEvent : No Category : TestEvent ComID : VMREST CorrelationID : DetailsInfo : Login App ID: DSTest Cluster ID: Node ID: XP2SENTAtPCBUC1

23:51:43.789 |LogMessage ID : sxntest ClientAddress : 10.207.68.175 Level : 7 EventType : UserLogging Resource: RESTIP EventStatus : Success CEvent : No Category : TestEvent ComID : VMREST CorrelationID : DetailsInfo : Login App ID: DSTest Cluster ID: 09XV4R Node ID: XP2SENTXRTPCBUC
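For this "Key : Value" layout, a search-time extraction with a dynamic-field transform may work (a sketch; mykvpairs and your:sourcetype are placeholder names, and the regex assumes each value is a single non-space token, so empty values such as CorrelationID and multi-word values such as DetailsInfo would need a tighter pattern):

# transforms.conf
[mykvpairs]
REGEX = (\w+) : (\S+)
FORMAT = $1::$2

# props.conf
[your:sourcetype]
REPORT-kvpairs = mykvpairs

The FORMAT = $1::$2 syntax turns each regex match into a field named by the first capture group, so all pairs are extracted without writing one regex per field.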
ERROR ExecProcessor - message from ""C:\Program Files\SplunkUniversalForwarder\bin\splunk-winevtlog.exe"" splunk-winevtlog - EvtDC::connectToDC: DsBind failed: (5)

We have 22 out of 3000+ hosts sending thousands of errors for this, and I can't seem to figure out why. My best guess at this point is that the forwarders need to be updated. We have a distributed environment with multiple DCs. Any idea if I'm doing something wrong on my end, or do I need to have the forwarders that are causing errors fixed?

I have things set up as follows.

All Windows hosts Universal Forwarders - inputs.conf:

[default]
evt_resolve_ad_obj = 0

Domain Controller UF inputs:

[admon://DefaultTargetDC]
targetDc = 'DC02'
startingNode = LDAP://OU=Computers,DC=ad
index = msad
monitorSubtree = 1
disabled = 0
baseline = 0
evt_resolve_ad_obj = 1

[admon://SecondTargetDC]
targetDc = 'DC03'
startingNode = LDAP://OU=Computers,DC=ad
index = msad
monitorSubtree = 1
disabled = 1
baseline = 0
evt_resolve_ad_obj = 0

[admon://ThirdTargetDC]
targetDc = 'DC01'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FourthTargetDC]
targetDc = 'DC02'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FifthTargetDC]
targetDc = 'DC01'
startingNode = LDAP://OU=Computers,DC=adu
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://FifthTargetDC]
targetDc = 'DC01dev'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://SixthTargetDC]
targetDc = 'DC04'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://SeventhTargetDC]
targetDc = 'DC05'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://EighthTargetDC]
targetDc = 'DC06'
startingNode = LDAP://OU=Computers,DC=ad
disabled = 1
index = msad
baseline = 0
evt_resolve_ad_obj = 0

[admon://NearestDC]
disabled = 1
baseline = 0
evt_resolve_ad_obj = 0
Mission Control (MC), currently in Preview, is a security operations application from Splunk Security. When all features are released, it will unify capabilities from Splunk Core, Enterprise Security, SOAR, and Threat Intelligence Management.  This release of Mission Control, known as “Preview 1”, provides the initial infrastructure for the app, the ingestion framework to move ES incidents into MC, and the framework for response capabilities via a feature called Response Templates.  It is a great way to see how the product is shaping up and what the fundamentals of Mission Control look like! Check out the docs here for more info! https://docs.splunk.com/Documentation/MC/Preview/   Note that the app is currently in an early release preview. 
I have a distributed environment with a search head (with deployment server) and an indexer. I SSH into the search head, create the app to be deployed, configure inputs.conf with a barebones config (using vi/vim), and deploy to a universal forwarder set up on a Windows server. It deploys the app, but when I view the deployed inputs.conf on the Windows server, all line breaks have been removed and it is all compressed into a single line. I tried using the command :set enc=utf8 in vim, but it had no effect. Any thoughts on what I can do to get the config into a Windows-recognizable format? Thanks!
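If the app is authored on Linux, the deployed file most likely has Unix (LF) line endings, which Notepad and some older Windows tools display as a single line even though Splunk itself parses the file fine. One option (a sketch) is to save the file with DOS (CRLF) line endings in vim before deploying:

:set fileformat=dos
:w

Alternatively, open the deployed copy in an editor that understands LF endings (for example Notepad++ or VS Code) to confirm the config is actually intact.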
I have created a custom add-on using the Splunk Add-on Builder app, which is running in an on-premise instance of Splunk Enterprise. The add-on utilizes a few REST API data input configurations that make calls to another vendor's product and pulls back specific data we're interested in. When I test the input inside the test pane of the add-on builder, it returns all expected events. I can also test the same REST API call outside of Splunk and it similarly returns all expected events. When I package and upload the add-on to our Splunk Cloud instance, however, the same data input only pulls back 60 events instead of the full amount (~250). Other data inputs within the add-on that are hitting the same REST API are able to pull back more than 60 events, so the limitation appears to be exclusive to this one data input, which again, does not have the same limit when run in the add-on builder or outside of Splunk entirely. Does anyone know why there would be a difference in behavior when run in our Cloud environment or where I might be able to find logs to help me answer that question?
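On the logging question: modular inputs built with Add-on Builder usually write their own log files, which end up in the _internal index (a sketch; the source pattern is a guess and depends on your add-on's name):

index=_internal source=*your_addon_name*

Comparing these logs between the on-premises and Cloud runs may show whether the REST call itself returned fewer events or whether paging/checkpointing stopped early.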
Hello, I have created an accelerated data model containing two sourcetypes. The goal is to add a field from one sourcetype into the primary results. The challenge I have been having is returning all the data from the vulnerability sourcetype, which contains over 400K events. I have attempted several different searches, none of which return the results I'm expecting. Basically, I want to join the two tstats searches below without using join, due to its limitations. The vulnerabilities should include the hostname and operating system. Any help would be appreciated.

| tstats summariesonly=f
    ``` latest(tenable.plugin_name) as "Vulnerability", ```
    values(tenable.first_found) as "First Found",
    values(tenable.last_found) as "Last Found",
    values(tenable.risk_factor) as "Risk Factor",
    values(tenable.in_the_news) as "In The News?",
    values(tenable.vpr_score) as "VPR",
    values(tenable.solution) as "Solution",
    values(tenable.see_also) as "See Also",
    values(tenable.state) as "State"
    values(tenable.exploitability_ease) as "Exploitable Ease",
    values(tenable.exploit_available) as "Exploit Available"
    values(tenable.ip) as IP
    ``` latest(tenable.asset_hostname) as hostname ```
    FROM datamodel="VulnMgt.tenable", WHERE sourcetype="tenable:io:vuln"
    by tenable.asset_uuid tenable.asset_hostname tenable.plugin_name
| tstats summariesonly=t prestats=t append=t
    values(devices.operating_systems) as OS
    FROM datamodel="VulnMgt.tenable", WHERE sourcetype="tenable:io:assets"
    by tenable.asset_uuid tenable.hostnames
| stats latest(*) as * count as total by tenable.asset_uuid
Hi, is it possible to extract one common field if we have two sourcetypes whose source paths are also different (the index is the same)?

Example:
sourcetype abc with source path /home/mysqld/$DB_NAME/audit/audit.log
sourcetype xyz with source path /mydata/log/$DB_NAME/audit/audit.log

I need to have the DB name extracted. Is that possible via regex, and if so, what would the regex be? Alternatively, could I use a single sourcetype with the two different source paths /home/mysqld/$DB_NAME/audit/audit.log and /mydata/log/$DB_NAME/audit/audit.log, and then extract the DB name from it via regex?
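Since the database name is always the path segment immediately before /audit/audit.log, a single regex against the source field can cover both sourcetypes (a sketch; DBname is an arbitrary field name):

<your base search>
| rex field=source "/(?<DBname>[^/]+)/audit/audit\.log$"

The same pattern could be made permanent in props.conf with, for example, EXTRACT-dbname = /(?<DBname>[^/]+)/audit/audit\.log$ in source under each sourcetype stanza.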
I am new to Splunk, so please forgive me for what I do not know. We are getting events with start=1661359208771 and need to convert it to a readable timestamp. I have tried changing the below timestamp format and prefix with no luck. Any suggestions?
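start=1661359208771 looks like an epoch timestamp in milliseconds (13 digits). At search time, a sketch assuming the value is already extracted into a field named start:

<your base search>
| eval start_readable=strftime(start/1000, "%Y-%m-%d %H:%M:%S.%3N")

If the goal is instead to have Splunk use this value as the event timestamp at index time, TIME_PREFIX = start= with TIME_FORMAT = %s%3N in props.conf should parse it, since %s%3N reads epoch seconds plus milliseconds.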
Hi all! I'm trying to create a timechart showing only the bars where the number of events is at least 2x the number of events in the previous 10 minutes.

E.g. if I have 10,000 events from 10:10 AM to 10:20 AM, 30,000 from 10:20 AM to 10:30 AM, and then 35,000 from 10:30 AM to 10:40 AM, I want the timechart to show only the bar for the 10:20-10:30 period, which is where the surge happened.

Hope that makes sense; thanks in advance!
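One way (a sketch, assuming a plain event count per 10-minute bucket) is to compare each bucket to the previous one with streamstats and keep only the buckets that at least doubled:

<your base search>
| timechart span=10m count
| streamstats current=f window=1 last(count) as prev_count
| where count >= 2 * prev_count

In the example above, 30,000 vs. 10,000 passes (3x) while 35,000 vs. 30,000 does not, so only the 10:20-10:30 bar remains. The first bucket has no prev_count and is dropped by the where clause.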
Good afternoon! We receive messages in Splunk. The task is as follows: there is a time interval between the first message and the second, and another between the second and the third. I need to somehow calculate the delta between these intervals and display it on a dashboard. Is this possible, and if so, how can I do it? Is there a rough example? Unfortunately, I have not worked with Splunk at all before, so I don't even know where to start. If you have clarifying questions, I'm ready to answer them.
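Yes, this is possible. A sketch using the delta command (the index and sourcetype are placeholders, and this assumes one event per message):

index=your_index sourcetype=your_sourcetype
| sort 0 _time
| delta _time as gap
| delta gap as interval_delta
| table _time gap interval_delta

Here gap is the number of seconds between consecutive messages and interval_delta is the change between two successive gaps; either field can then drive a dashboard table or single-value panel.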
Hello, I would like to create multiple new custom data source categories to use in a Partner Integration app on Splunk Security Essentials. I already read this documentation and was able to create a single new custom data source category. However, when I tried to create multiple custom data source categories by changing the "company_name" of other security contents, the existing data source categories were not updated; the additional categories were not created, and only the first data source category I had created continued to appear. I then noticed the following snippet in the SSE documentation, in the "Populating Data Inventory" section: "[...] it will take any detections that have a create_data_inventory=true configuration. For the first piece of content that it finds, it will add a new item to data_inventory output [...]". This made me wonder whether the app is programmed to create only the first new data source category it finds, and none of the others.

So I have the following questions:
1. Is it possible to create multiple new custom data source categories?
2. How could I create them?
Hello, "The ingestion certificates on xxxx Splunk Cloud environment xxx Universal Forwarder certificate package, will be expiring on x/xx/2022. In order to ensure that ingestion is not disrupted, w... See more...
Hello, "The ingestion certificates on xxxx Splunk Cloud environment xxx Universal Forwarder certificate package, will be expiring on x/xx/2022. In order to ensure that ingestion is not disrupted, we have rolled out an updated Universal Forwarder (UF) package to your customer’s Splunk Cloud Platform stack. The operational contacts have been informed of this information via xxxx. They will need to install this updated package on all forwarders connecting to their Splunk Cloud Stack as soon as possible. We are asking you to please reach out to your customer and verify they are aware that they are responsible for rolling out this package and should do so immediately." I have received a message from splunk and I would like you to please confirm if what I must do is related to this link https://docs.splunk.com/Documentation/Forwarder/9.0.1/Forwarder/ConfigSCUFCredentials?ref=hk#HowtoforwarddatatoSplunkCloud#How_to_forward_data_to_Splunk_Cloud      
We are trying to audit/monitor administrative activity in Splunk. Are there canned dashboards or searches that can be used to monitor/review elevated-privilege activity? How do we monitor change management on Splunk itself?
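As a starting point (a sketch; the filters are illustrative, not exhaustive), Splunk's own audit trail in the built-in _audit index can be searched directly:

index=_audit (action=edit* OR action=*login*)
| stats count by user action info
| sort - count

The Monitoring Console also ships with activity dashboards, and on Splunk 9.0+ the _configtracker index records changes to .conf files, which helps with change management on Splunk itself.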