All Topics



I followed https://answers.splunk.com/answers/672614/how-to-configure-multiple-drivers-for-database-in.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev but it's not working for me. I installed the latest version of DB Connect (3.3) and JRE 11. I am trying to use both the Oracle 11 and Oracle 12 drivers. I created a local db_connection_types.conf and placed both jars (ORJDBC11 and ORJDBC12) under splunk_app_db_connect/drivers, but Splunk is picking up only the higher version. Any ideas?
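One possible wrinkle (an assumption, not confirmed against DB Connect 3.3): both ojdbc jars export the same driver class, oracle.jdbc.OracleDriver, so the JVM may resolve only one of them from the classpath no matter how many connection types are defined. A hedged sketch of what a local db_connection_types.conf stanza might look like; the stanza name is illustrative and the parameter names should be checked against the default/db_connection_types.conf shipped with your DB Connect version:

```
# local/db_connection_types.conf -- sketch only; verify parameter names
# against default/db_connection_types.conf in your DB Connect version
[oracle_11]
displayName = Oracle 11g
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = oracle.jdbc.OracleDriver
jdbcUrlFormat = jdbc:oracle:thin:@<host>:<port>/<database>
port = 1521
```

Note that even with two such stanzas, the jdbcDriverClass is identical for both Oracle jars, which would explain why only one driver version "wins".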
Something to ponder while working from home... I am planning on storing and managing my config files in Git. We recently ran into some confusion managing our props files, where our support teams got confused about the same props file (containing extractions and line breaking) being deployed on both search heads and indexers. So I thought I would come up with a naming convention that aligns with Splunk's pipeline phases, as below:

<company>_search_<app> - search app for user dashboards and reports (not to be held in Git at present)
<company>_data_<app> - field extractions, calculated fields
<company>_parse_<app> - props and transforms for line breaking, timestamping, etc.
<deployment>_<p|t>_<app>_<sub_component> - inputs, outputs, etc.; very much environment specific

Does anyone else worry about this stuff like I seem to, and have a suggestion? Mike
Hello ppl, I have a problem: I need to extract 2 fields from this event.

[14/04/2020 16:17:49][INFO][http-8080-36][ar.xxx.xxx.xx]:116 - (3;;57AF476E9DDAF14CA60BA1E589C55CF8) Usuario: UserName Operador: OperadorName EstadoOperadorToken: Token Activo

I need a regex that captures the string after "Usuario" and "Operador". Can anyone help me?
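A minimal extraction sketch for this event, assuming the two values never contain spaces; the output field names usuario and operador are illustrative:

```
... | rex field=_raw "Usuario:\s+(?<usuario>\S+)\s+Operador:\s+(?<operador>\S+)"
```

Against the sample event this would yield usuario=UserName and operador=OperadorName; the same pattern could also live in props.conf as an EXTRACT- attribute for search-time extraction.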
I have XML files on my PC that I want to index in Splunk, so I need inputs.conf and props.conf changed. I did everything but I am stuck on line-breaking the events. The time is in Unix time format, something like:

<Date_range>
<begin>1586965192</begin>
<end>1586965199</end>
</Date_range>

How do I specify the time format? I have no idea how to specify the time format in my props.conf or transforms.conf. If you have anything that might help, please share!
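A hedged props.conf sketch for this layout; the sourcetype name is made up, and the LINE_BREAKER may need adjusting to the real file structure. %s is the strptime directive for epoch seconds:

```
# props.conf -- sketch; sourcetype name is illustrative
[xml_date_range]
SHOULD_LINEMERGE = false
# start a new event at each <Date_range> element
LINE_BREAKER = ([\r\n]+)(?=<Date_range>)
# read the timestamp from the <begin> element, as epoch seconds
TIME_PREFIX = <begin>
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```

Timestamp settings belong in props.conf, not transforms.conf; transforms.conf is not involved in timestamp recognition.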
Hi, I looked at the daily ingestion for an index, and I see the total data ingested in the last 7 days is 800 GB. When I calculate the total raw data size, it shows 1626 GB of raw data, compressed to 759 GB, which is about 46%. I don't understand: if I ingested 800 GB in the last 7 days, how did the raw total come to 1626 GB? Any input will be appreciated.

Query used for compression:

| dbinspect index=xyz
| fields state, id, rawSize, sizeOnDiskMB
| stats sum(rawSize) AS rawTotal, sum(sizeOnDiskMB) AS diskTotalinMB
| eval diskTotalinGB=(diskTotalinMB/1024)
| eval rawTotalinGB=(rawTotal / 1024 / 1024 / 1024)
| fields - rawTotal
| eval compression=tostring(round(diskTotalinGB / rawTotalinGB * 100, 2)) + "%"
| table rawTotalinGB, diskTotalinGB, compression

Result:

rawTotalinGB diskTotalinGB compression
1626.19525605347 759.39445495605 46.70%

Query used to calculate daily ingestion:

index=_internal source="license_usage.log" type=Usage idx=xyz
| eval yearmonthday=strftime(_time, "%Y-%m-%d")
| eval yearmonth=strftime(_time, "%Y-%m-%d")
| stats sum(eval(b/1024/1024/1024)) AS volume_b by idx yearmonthday yearmonth
| chart sum(volume_b) over yearmonth by idx
| addcoltotals

This gives me a total of 862 GB ingested in the last 7 days.
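One thing worth checking, assuming this index lives on a clustered or otherwise replicated indexer tier: dbinspect reports buckets from every peer, so with a replication factor of 2 each bucket's rawSize would be counted twice, which roughly reconciles 2 x 800 GB with 1626 GB (license usage, by contrast, counts each event once at ingest). A sketch that counts each bucket only once via its bucketId:

```
| dbinspect index=xyz
| dedup bucketId
| stats sum(rawSize) AS rawTotal, sum(sizeOnDiskMB) AS diskTotalinMB
```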
I was wondering if anyone had a good solution for a proper sourcetype for dmesg? Or, failing that, some way of handling the fact that it differs from most other logs: entries aren't always single lines, and the timestamps are relative to system boot. That makes it difficult for the indexers to assign a timestamp to the entries.
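A hedged sketch of a dmesg sourcetype; since the bracketed numbers are seconds since boot rather than wall-clock time, DATETIME_CONFIG = CURRENT stamps each event with the time it is indexed instead of trying to parse them:

```
# props.conf -- sketch; sourcetype name is illustrative
[dmesg]
SHOULD_LINEMERGE = false
# break before each "[ 1234.567890]" relative timestamp
LINE_BREAKER = ([\r\n]+)(?=\[\s*\d+\.\d+\])
# timestamps are relative to boot, so fall back to current time
DATETIME_CONFIG = CURRENT
```

The trade-off is that events from a file read long after boot get index time, not true occurrence time; only tooling that reads /dev/kmsg or journald (which record wall-clock time) can do better.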
Hello everyone, is it possible to send traffic from AWS Traffic Mirroring (an AWS VPC feature) directly to Splunk Cloud? Or is the mandatory approach to have AWS instances act as Splunk Stream forwarders? Thanks!
Hi, I have two text columns, finding_id and device.manufacturer, and a count of events containing both. I'd like a scatter chart with device.manufacturer on the y-axis and finding_id on the x-axis, but everything seems to revert to a numerical axis. Am I missing something? The below is from the stats page:

count finding_id device.manufacturer
9 V-3086 Cisco
9 V-3034 Cisco
9 V-14717 Cisco
9 V-14667 Cisco
8 V-5618 Cisco
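Splunk's scatter chart expects numeric values on both axes, so two text dimensions won't plot directly. One workaround (a sketch built on the field names from the question) is to pivot the counts into a matrix, one row per manufacturer and one column per finding, and render that as a table or heat map instead:

```
... | chart sum(count) over "device.manufacturer" by finding_id
```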
Hi, I have a vulnerability scanner that scans all devices on our network every day. The scanner's agent is on all endpoints being scanned. When an endpoint is offline or being rebooted, it misses the scan, does not appear in the scan results, and so does not appear in Splunk. What I need is a Splunk search that tells me the online/offline status of an endpoint using the above data. For example, is it possible to compare yesterday's data, when an endpoint appeared in the scan, with today's, when it did not, and show the results accordingly?
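A sketch of that comparison, assuming the scan results live in an index called vuln_scans with a host field; both names are placeholders for whatever the real data uses:

```
index=vuln_scans earliest=-2d@d latest=now
| stats max(_time) AS last_seen BY host
| eval status=if(last_seen >= relative_time(now(), "-1d@d"), "online", "missed last scan")
| table host, last_seen, status
```

One caveat: an endpoint absent from both days never appears in the events at all, so for full coverage the search would need to start from a lookup of expected hosts and left-join the scan data onto it.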
I have set a default custom time range between last week and this week Thursday; however, the latest time is not being reflected correctly in the token myTimepickerUnixLatest. Instead, the myTimepickerUnixLatest token ends up exactly the same as myTimepickerUnixEarliest. I am not sure where I am going wrong; I believe the fault lies in relative_time(now(),'latest').

<input type="time" token="time">
  <label></label>
  <default>
    <earliest>@w4</earliest>
    <latest>+1w@w4</latest>
  </default>
  <change>
    <eval token="myTimepickerUnixEarliest">if(isnum('earliest'),'earliest',relative_time(now(),'earliest'))</eval>
    <eval token="myTimepickerUnixLatest">if(isnum('latest'),'latest',relative_time(now(),'latest'))</eval>
    <eval token="myTimepickerEarliest">strftime(myTimepickerUnixEarliest, "%B %d %Y %H:%M:%S")</eval>
    <eval token="myTimepickerLatest">strftime(myTimepickerUnixLatest, "%B %d %Y %H:%M:%S")</eval>
  </change>
</input>
I have a new Splunk deployment with a multi-site indexer cluster. I have set up heavy forwarders using indexer discovery, assigning them to the primary site. In my DMC, all health checks and the indexer cluster status look good, as does the cluster status when viewed on the master. In splunkd.log on the index peers and the master, I have no errors. I have set up an SSL input on the indexer cluster and do not have a non-SSL input enabled, and I have configured the heavy forwarders' outputs.conf with useSSL. To keep things simple for now, I am not requiring a client certificate in the indexers' inputs.conf. The problem I am seeing is in the heavy forwarder's splunkd.log, which states:

Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group {{ redacted }} has been blocked for 30 seconds

I have verified connectivity from the heavy forwarders to the master and the index peers, including the input port on the index peers. Any thoughts?
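When an output group blocks like this over SSL despite basic connectivity working, a common culprit (a guess, not a diagnosis) is a TLS handshake failing silently; splunkd.log on the receiving indexers usually shows SSL errors at the moment the forwarder connects. A hedged outputs.conf sketch for isolating the handshake on the forwarder side; the group name is a placeholder:

```
# outputs.conf on the heavy forwarder -- names are placeholders
[tcpout:primary_indexers]
useSSL = true
# while testing only, relax server-cert verification to rule out
# a certificate-chain mismatch as the cause of the block
sslVerifyServerCert = false
```

Running openssl s_client -connect <indexer>:<ssl-port> from the forwarder is also a quick way to confirm the indexer is actually presenting a certificate on that port.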
Hi Splunkers, any advice on how to avoid mixing values in assets via the entitymerge command? I have 5 fields marked as multivalue (category, ip, and mac among them), and sometimes this causes trouble, like multiple assets mapped to all existing categories or to multiple IP addresses, and thus incorrect notable events. Do I understand correctly that I should mark a field as multivalue only if there might be several values (like 2 MAC addresses for a computer, or several categories such as router|network)? Splunk Enterprise v7.3.3, Enterprise Security v6.0.1 build 2.
Hi, in our production environment we have one search head and two indexers, all running Splunk version 7.1.1. Recently we faced an issue on one of our indexers where DB inputs were automatically removed from the Splunk DB Connect app, version 2.4.0. As our vendor suggested, we upgraded the DB Connect app to 3.1.4. But now we are facing a different issue: whenever the Splunk DB Connect app 3.1.4 is enabled, the internal logs are not updating. Is there any limitation? Kindly suggest.
On the "Access Control > Users" page, most users have "system" in the "default app inherited from" column, but one has "user", and another has a blank in that column. What does this mean? How can I edit it?
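The same information can be read programmatically; a sketch using the REST endpoint behind that page (running | rest requires the appropriate capability, and my understanding, which should be verified, is that defaultAppSourceRole is the field the UI renders as "default app inherited from"):

```
| rest /services/authentication/users
| table title, defaultApp, defaultAppSourceRole
```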
Hi all, I'm trying to split Windows events into different indexes at index time, depending on the host which is sending them. Below are my props.conf and transforms.conf.

props.conf:

[WMI:WinEventLog:Security]
TRANSFORMS-set_new_index = set_index_new

transforms.conf:

[set_index_new]
REGEX = MY.HOSTNAME.12.COM
FORMAT = windows-new
DEST_KEY = _MetaData:Index

I tried different combinations of the regex but none of them worked. Can someone tell me what could be wrong? Thanks in advance. Best.
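One detail worth noting: a transforms.conf REGEX runs against _raw by default, so a hostname pattern only matches if the hostname happens to appear in the event text. To match the host metadata field instead, set SOURCE_KEY. A hedged sketch (the host value carries a "host::" prefix in the metadata key, and the dots are escaped so they match literally):

```
# transforms.conf -- sketch; hostname taken from the question
[set_index_new]
SOURCE_KEY = MetaData:Host
REGEX = host::MY\.HOSTNAME\.12\.COM
FORMAT = windows-new
DEST_KEY = _MetaData:Index
```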
The installation instructions do not mention anything specific about using this Git Version Control for Splunk app in a search head cluster setup. Is that supported, and if so, are there any specific things to keep in mind?
Currently we are running Splunk Cloud version 7.2.9.1; the same applies to the indexers, cluster master, and search heads. We have recently built a heavy forwarder server, so can I go ahead and install the latest version, i.e. 8.0.3? I want to know whether it will be compatible with 7.2.9.1, and whether data ingestion will be disturbed. Or should I install an older heavy forwarder package, i.e. something nearer to 7.3? Kindly let me know.
I'm trying to add a bit of jazziness to my Simple XML form, and I wanted to put some colour on a couple of HTML elements. I know I can add a style sheet or a style tag, but I wanted to use the brand colour names so it changes as the tool does (e.g. brandColorD30). I thought it might be possible, as in Bootstrap, by adding it to the class, but this didn't work. Any tips would be helpful.
I've been reading into how to filter incoming events between the parsing and indexing stages of our data pipeline and found a Splunk doc (https://docs.splunk.com/Documentation/SplunkCloud/8.0.2001/Forwarding/Routeandfilterdatad) which suggests it should be possible to do this by just adding a props.conf and transforms.conf at the indexer. I have added what I thought would be valid entries in the conf files, as follows:

props.conf:

[source::IIS_Exchange]
TRANSFORMS-set = setnull,ExchangeParsing

transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[ExchangeParsing]
REGEX = /(-\s401\s1\s1[39][02][69])/
DEST_KEY = queue
FORMAT = indexQueue

The regex should match HTTP 401 errors with a sub-status of 1 and a pair of event error codes we're monitoring for (1326 and 1909), for example:

401 1 1909 9
401 1 1326 67

I've tested the regex in both Notepad++ and on RegExr.com and it looks like it should work. We use a universal forwarder to send the data to the indexer, which just monitors standard IIS logs on a load-balanced server pair; the inputs.conf for the relevant source is as follows:

[monitor://G:\inetpub\logs\LogFiles\W3SVC1*.log]
index = mar
source_type = iis
disabled = false
recursive = true
source = IIS_Exchange

I've restarted Splunk on the indexer for this to take effect, but nothing appears to be happening. I had expected that if the 2nd stanza in transforms.conf was incorrect, we'd get no data at all from the source, as it would all initially be filtered to the nullQueue. This suggests to me that the transform isn't applying to the incoming data at all, but I cannot fathom why. Can anybody please tell me what I'm doing wrong?
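Two observations on the configs above. First, Splunk's REGEX attribute takes a bare PCRE, not a /slash-delimited/ one, so the literal slashes in [ExchangeParsing] are treated as characters to match and the stanza never fires. Second, inputs.conf expects sourcetype =, not source_type =. A sketch of the routing stanza without delimiters (alternation is used here purely for readability; the original character-class form would also work once the slashes are removed):

```
# transforms.conf -- sketch; codes taken from the question
[ExchangeParsing]
REGEX = -\s401\s1\s(?:1326|1909)\s
DEST_KEY = queue
FORMAT = indexQueue
```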
I have a query that uses the map command with a subsearch, as below:

host="X" booking source="Y" Success
| dedup ID
| table ID
| map maxsearches=10 search="search host="X" source="Y" $ID$ | stats range(_time) as "booking time" | table ID "booking time""

I'm trying to get the ID field from the main search and run the map subsearch with the ID field as a variable. In the main search I'm looking for events with Success and parsing the IDs. In the subsearch I'm trying to evaluate the time between the first and last occurrence of each ID. I expect results in a table format like below:

ID "booking time"
3345 867.34
2245 665.7
etc.

but I failed. What am I doing wrong? Thanks in advance.
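For a per-ID duration, map may not be needed at all; a single pass with a BY clause computes the same range for every ID at once. A sketch using the field names from the question:

```
host="X" source="Y"
| stats range(_time) AS "booking time" BY ID
| table ID, "booking time"
```

This also sidesteps two map pitfalls: the maxsearches cap, and the fact that double quotes nested inside search="..." must be escaped as \" or the string ends early.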