All Posts


The first step in debugging this kind of issue is to run two commands: splunk list monitor and splunk list inputstatus. But as far as I remember, Splunk has problems with monitor inputs overlapping the same directories. You could instead monitor the whole directory with a whitelist of all four types of files and then dynamically rewrite the sourcetype on ingest, depending on the file path included in the source field. But yes, it can cause issues with multiple significantly different sourcetypes (especially if they differ in timestamp format/placement).
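To make that concrete, here is a minimal sketch of that approach. The directory, index, sourcetype names and whitelist pattern are assumptions, and the rewrite stanzas must live on the parsing tier (indexer or heavy forwarder), not on the UF.

inputs.conf (on the forwarder):

[monitor:///var/log/myapp]
index = main
sourcetype = myapp:generic
whitelist = \.(access|error|audit|debug)\.log$

props.conf (on the parsing tier):

[myapp:generic]
TRANSFORMS-set_sourcetype = myapp_set_st_access, myapp_set_st_error

transforms.conf:

# Match against the source path, not the event text
[myapp_set_st_access]
SOURCE_KEY = MetaData:Source
REGEX = access\.log$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::myapp:access

[myapp_set_st_error]
SOURCE_KEY = MetaData:Source
REGEX = error\.log$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::myapp:error

(and two more analogous transforms for the remaining file types). Keep in mind that line breaking and timestamp extraction still follow the props of the original sourcetype (myapp:generic here), which is exactly why very different timestamp formats can be a problem, as noted above.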
This is not a trivial task since Splunk does not record when each KO is used. Some are easy to determine - scheduled searches, reports, and alerts, for example. You should be able to use the audit log to find uses of dashboards and unscheduled saved searches. Others, like macros, aliases, and tags, will be more challenging. It will require parsing every executed search (find them in _audit) and identifying the KOs in each. That will produce a list of *used* KOs. From that, you can derive a list of unused objects.
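As a rough starting point (a sketch only, assuming the default audittrail field names), saved-search usage can be pulled from _audit like this:

index=_audit sourcetype=audittrail action=search info=completed savedsearch_name=*
| stats count AS executions latest(_time) AS last_run BY user savedsearch_name
| convert ctime(last_run)

Ad-hoc searches land in the same index, so the raw search strings (the search field) are what you would have to parse for macros, tags, lookups and the like.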
@yuanliu So this section of the props.conf spec:

MAX_EVENTS = <integer>
* The maximum number of input lines to add to any event.
* Splunk software breaks after it reads the specified number of lines.
* Default: 256

takes precedence over the LINE_BREAKER?
@PickleRick we have a cloud deployment and I see only two panels in ingest; I want data per sc4s host, not per Splunk server.
I had a similar problem and the answer is in Line breaking.  See Why are REST API receivers/simple breaks input unexpectedly in Getting Data In.
Version 9.2.2 was released on July 1, 2024. That means it's supported "fully" for 24 months from the release date and for 36 additional months at P3 level. https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
What version of Red Hat are you running?
These were the recent error messages. Also, I am loading the LLM in the bin folder. Is that an issue? The following is the output I can see when I ran the .py file in the terminal:

splunkd.log

07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUser
07-30-2024 18:23:29.862 -0500 ERROR RegisterPackageHandler [3255548 MainThread] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:29.862 -0500 ERROR LocalProxyRestHandler [3255548 MainThread] - destination not found for tlPackage-scimUsers
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: reading standard input
07-30-2024 18:23:29.998 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 [begin] SIGUSR1 handler
07-30-2024 18:23:30.002 -0500 ERROR SidecarThread [3255548 MainThread] - <stderr> Sidecar: 2024/07/30 18:23:30 Supervisor logs printed at : /opt/splunk/var/log/splunk
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts Not supported, since no remote queue is configured in inputs.conf
07-30-2024 18:23:30.098 -0500 ERROR NoahHeartbeat [3255712 SplunkdSpecificInitThread] - event=deleteReceipts message="Unable to load smartbus conf"
07-30-2024 18:23:30.110 -0500 ERROR loader [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR pipeline [3255846 datalakeinput] - Couldn't find library for: datalakeinputprocessor
07-30-2024 18:23:30.110 -0500 ERROR PipelineComponent [3255548 MainThread] - The pipeline datalakeinput threw an exception during initialize
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroup
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimGroups
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUser
07-30-2024 18:23:35.350 -0500 ERROR RegisterPackageHandler [3255554 HTTPDispatch] - Failed to get destination for endpoint: tlPackage-scimUsers
07-30-2024 18:23:35.350 -0500 ERROR LocalProxyRestHandler [3255554 HTTPDispatch] - destination not found for tlPackage-scimUsers
Check the searches from the license report and adjust to your needs.
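For example, this is roughly what the per-host volume search boils down to (a sketch using the standard license_usage.log fields; note that the h field can be squashed to empty values when there are very many distinct hosts, and in Splunk Cloud these events may not all be visible):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) AS bytes BY h
| eval GB = round(bytes/1024/1024/1024, 3)
| sort - GB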
It's not very common, but it lets you avoid escaping yourself to death.
No. It's not about the order of operations. It's about search-time vs. index-time. REPORT and EXTRACT are two operations that are performed on the event at search time - when the event is being read from the index and processed before being presented to the user. INGEST_EVAL is an operation that is performed at index time - when the event is initially received from the source and before it's written to the index. Your search-time operations are not performed at index time (and vice versa). So regardless of whether you define your search-time operations inline or with a transform (in other words, as EXTRACT or REPORT), they are not active at index time.

You can only operate on indexed fields with INGEST_EVAL. So if you want to extract a part of your event in order to use it in INGEST_EVAL, you have to first extract it with a transform as an indexed field (if you don't need it stored later, you can afterwards rewrite it with another INGEST_EVAL to null()).
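A minimal sketch of that pattern (the stanza and field names below are made up for illustration, not taken from this thread):

props.conf:

[my_sourcetype]
TRANSFORMS-century = extract_yy, add_century

transforms.conf:

# 1. Extract the two-digit year as an indexed field so it exists at index time
[extract_yy]
REGEX = ^(\d{2})
FORMAT = yy::$1
WRITE_META = true

# 2. INGEST_EVAL can now reference the indexed field created by the previous transform
[add_century]
INGEST_EVAL = full_year="20".yy

fields.conf:

[full_year]
INDEXED = true

If the intermediate field is not wanted in the index, a further INGEST_EVAL transform can set it back to null(), as mentioned above.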
That makes sense; the docs mentioned the order of operations, but sometimes that doesn't sink in. It was easy enough to transition, but I am still not seeing the field. I do see the fields as parsed from the props.conf.

Props:

[wsjtx_log]
#REPORT-wsjtx_all = wsjtx_log
EXTRACT-wsjtx = (?<year>\d{2})(?<month>\d{2})(?<day>\d{2})_(?<hour>\d{2})(?<min>\d{2})(?<sec>\d{2})\s+(?<freqMhz>\d+\.\d+)\s+(?<action>\w+)\s+(?<mode>\w+)\s+(?<rxDB>\d+|-\d+)\s+(?<timeOffset>-\d+\.\d+|\d+\.\d+)\s+(?<freqOffSet>\d+)\s+(?<remainder>.+)
TRANSFORMS = add20

Transform:

[wsjtx_log]
#REGEX = (\d{2})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})\s+(\d+\.\d+)\s+(\w+)\s+(\w+)\s+(\d+|-\d+)\s+(-\d+\.\d+|\d+\.\d+)\s+(\d+)\s+(.+)
#FORMAT = year::$1 month::$2 day::$3 hour::$4 min::$5 sec::$6 freqMhz::$7 action::$8 mode::$9 rxDB::$10 timeOffset::$11 freqOffSet::$12 remainder::$13

[add20]
INGEST_EVAL = fyear="20" . $year$

Fields:

[fyear]
INDEXED = TRUE
Hi Team, I have containerized sc4s hosts which have UFs installed, but sc4s is forwarding data via HEC. I want to see the total logging size per host or per sc4s source. Can someone help me with the query to get that data?
Why was Windows Server 2016 removed from Splunk Universal Forwarder as of v9.3 (7/30/2024), when Windows Server 2016 is still under extended support until 2027?
I'm working with a 9.1.2 UF on Linux. This is the props.conf:

[stanza]
#
# Input-time operation on Forwarders
#
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TRUNCATE = 999
DATETIME_CONFIG = CURRENT

This is the contents of the file:

Splunk Reporting Hosts as of 07/31/2024 12:05:01 UTC
host
hostname1
hostname2
hostname3
hostname4
...
hostname1081

There are 1,083 lines in the file. I used od -cx to verify there is \n at the end of each line. For some reason, the last entry from a search consists of the first 257 lines from the file, and then the remaining lines are individual entries. I didn't have DATETIME_CONFIG in the stanza, so I thought that might be the issue. It is there now, and it is still an issue. I'm out of ideas. Has anyone seen this before, or have an idea on how to resolve it?

TIA,
Joe
Thanks for your speedy reply, @gcusello!

Splunk is running as root and is monitoring other files within /var/log, for example /var/log/audit/audit.log.

There is a specific IP I want to monitor, /var/log/syslog/192.168.1.1, plus all subdirectories and files under it. My thought was this should work:

[monitor:///var/log/syslog/192.168.1.1]
disabled = false
recursive = true
index = insight

The index does exist and Splunk is running as root.
Hi @lostcauz3,

this is a job for a Splunk Certified Architect, not for the Community, and I'd avoid designing a distributed architecture with the limited knowledge you said you have.

Anyway, there's a lot of information required:

is HA required at the data level (Indexer Cluster)?
is HA required at the presentation level (Search Head Cluster)?
do you need to use Premium apps such as ES or ITSI?
do you have Universal Forwarders to manage? If yes, how many?
what's the retention of your data?
can you confirm 5 GB/day of daily indexed data?
how many concurrent users do you expect (1000 is probably too many!)?

Anyway, you can find information about the validated Splunk architectures in the document https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf

For hardware reference, you can see https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware

Ciao.
Giuseppe
For that, you use inputlookup. It's simply the reverse logic. I will use the same assumptions about your index search, with the same assumed field names.

| inputlookup roomlookup_buildings.csv where NOT [search index= buildings_core "Buildings updated in database*"
  | rex "REQUEST_UNIQUE_ID:(?<request_unique_id>[^ ]+)"
  | rex "Buildings updated in database:\s(?<buildings>\{[^}]+\})"
  | eval buildings = replace(buildings, "[{}]", "")
  | eval buildings = split(buildings, ",")
  | mvexpand buildings
  | eval building_from_search1 = mvindex(split(buildings, ":"), 1)
  | fields building_from_search1
  | rename building_from_search1 as buildings]
| rename buildings as buildings_only_in_lookup

Using the same emulation as shown above, the mock data would give

buildings_only_in_lookup
Antara

Also, a point about the name building_from_search1 (or building_from_index_search as in your latest comment): its value comes from an original field named "buildings", which is the same name used in the lookup. It is much easier to keep using that name on the left-hand side of the assignments, because there doesn't appear to be any use of the original value further down the stream.
Hi @JoshuaJJ,

at first: are you running Splunk as root or as the splunk user? If as the splunk user, does this user have the permissions to read these files?

Then please try this:

[monitor:///var/log/syslog/*/*/*/]
disabled = false
host_segment = 4
index = insight
whitelist = secure|cron|message

Ciao.
Giuseppe
Hi, I'm trying to design a distributed Splunk architecture for my company, and I need to pitch the design to them. I need to know the total number of servers required and each system's specifications.

How can I start with this? I have little knowledge of the Splunk admin side, mainly because I am a developer.

Users/day can be less than 1000 and the indexing volume should be around 5 GB/day.

Can anyone please recommend where to start?