All Topics

Greetings, Can I set the clientName in deploymentclient.conf through the CLI? This has been asked before several times:
In 2012: https://community.splunk.com/t5/Deployment-Architecture/Set-deployment-client-name-in-CLI/td-p/33613
In 2016: https://community.splunk.com/t5/Getting-Data-In/Can-I-set-the-clientName-in-deploymentclient-conf-through-the/m-p/227222
Nothing jumps out at me in the supported CLI commands list, but I could be wrong: https://docs.splunk.com/Documentation/Forwarder/8.0.5/Forwarder/SupportedCLIcommands
So I thought I would ask again to confirm. The use case is remote customers onboarding the universal forwarder on their machines to our deployment server, which is pretty straightforward. Isn't asking them to manually alter their .conf file bad practice? Thanks
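For reference, the documented CLI command shown below only points the forwarder at a deployment server; clientName itself appears to live only in deploymentclient.conf. A minimal sketch, with hypothetical host and client names:

# CLI (documented): set the deployment server target on the forwarder
splunk set deploy-poll ds.example.com:8089

# deploymentclient.conf - clientName set directly in the file (values are examples)
[deployment-client]
clientName = customer-site-uf01

[target-broker:deploymentServer]
targetUri = ds.example.com:8089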
Hello, how can I extract debitsksvrvru7 from this value: sndb(1p_debitsksvrvru/-363877568/localhost_debitsksvrvru7)pass:len=152 The field name is Description. Thank you.
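A minimal sketch of one way to pull that trailing token out of the Description field with rex; the capture name extracted_host is an arbitrary choice, and the pattern assumes the token always follows "localhost_" and ends at the closing parenthesis:

... | rex field=Description "localhost_(?<extracted_host>[^\)]+)\)"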
One of our customers is looking to set up a SOC with a SIEM solution, and they want to monitor and manage multiple PCI zones across multiple geographies. The customer is governed by multiple data regulations across various regions and countries and doesn't want PCI data to traverse to one centralized SOC location; however, they are looking to design a cost-effective and PCI-compliant SOC. I am sure this is not a unique situation and there will be existing customers and Splunk installations with similar requirements. My question is: how can we implement Splunk in such an environment (architecture and components) to manage multiple PCI zones across geographies, demonstrate PCI compliance, and ensure PCI data is managed locally to meet local data compliance? PCI logs would be managed in a decentralized manner, while the overall SOC is managed centrally without breaching any data regulations. This is my first question in the forum and I look forward to getting advice and help from other members.
Hello. Trying to resolve an issue with routing log events. The goal is to route log events with an "Api" keyword to a separate index. Here is a log sample of three events (the first has no "Api" embedded, the next two have "Api"):

2020-08-12 23:04:24 W3SVC5 SERVER_1 XX.XX.XX.XXX GET / - 443 - XX.XX.XX.XX HTTP/0.9 - - - - 302 0 0 389 7 10
2020-08-12 23:04:24 W3SVC5 SERVER_1 XX.XX.XX.XXX GET /Api/TopicsUpdate/GetRecalculationServiceTopicsThatMustBeUpdated pageSize=1 443 system.service XX.XX.XX.XX HTTP/1.1 - - - XX.XX.XX.XX 200 0 0 597 175 44
2020-08-12 23:04:22 W3SVC5 SERVER_1 XX.XX.XX.XXX GET /Api/TopicsUpdate/GetRecalculationServiceTopicsThatMustBeUpdated pageSize=1 443 system.service XX.XX.XX.XX HTTP/1.1 - - - XX.XX.XX.XX 200 0 0 597 175 54

Here is the props.conf file:

[sourcetype1]
TRANSFORMS-set = sourcetype_web_rename_iis,sourcetype_api_rename_iis,web_index_rename_iis,api_index_rename_iis

Here is the transforms.conf:

[sourcetype_web_rename_iis]
REGEX = \d+\-\d+\-\d+\s\d+\:\d+\:\d+\s\w+\s\w+\s\d+.\d+\.\d+\.\d+\s\w+\s\/(?!Api)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ms:iis

[sourcetype_api_rename_iis]
REGEX = \d+\-\d+\-\d+\s\d+\:\d+\:\d+\s\w+\s\w+\s\d+.\d+\.\d+\.\d+\s\w+\s\/Api\/
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ms:iis

[web_index_rename_iis]
REGEX = \d+\-\d+\-\d+\s\d+\:\d+\:\d+\s\w+\s\w+\s\d+.\d+\.\d+\.\d+\s\w+\s\/(?!Api)
DEST_KEY = _MetaData:Index
FORMAT = index1

[api_index_rename_iis]
REGEX = \d+\-\d+\-\d+\s\d+\:\d+\:\d+\s\w+\s\w+\s\d+.\d+\.\d+\.\d+\s\w+\s\/Api\/
DEST_KEY = _MetaData:Index
FORMAT = index2

Are there any special considerations when using (?!...)? Regards, Max
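As a side note, transforms.conf REGEX uses PCRE, so the negative lookahead itself is legal there. One rough way to sanity-check the two patterns before relying on them at index time is to run them against already-indexed copies of these events; a sketch only, with the index name as a placeholder and the lookahead pattern tidied for search use:

index=your_index sourcetype=sourcetype1
| regex _raw="\d+-\d+-\d+\s\d+:\d+:\d+\s\w+\s\w+\s\d+\.\d+\.\d+\.\d+\s\w+\s/(?!Api)"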
Hi, we have disabled [distributedSearch] on our Splunk cluster's master and indexer nodes. With this we are seeing the issues below.

WARN on the master:
WARN DistributedPeerManager - Cannot determine a latest common bundle, search may be blocked

ERROR on the indexers:
SearchPeerBundlesSetup - Cannot find bundles for search peer: <master_ip_node>

What we tried:
Enabled dist search on the master alone (not on the indexers) - both issues are gone.
Enabled dist search on all indexers alone (not on the master) - we can still see both issues.
Made an update to one of the apps and did apply-bundle - this succeeded without any issues.

So the solution seems to be enabling dist search on the master, but I wanted to get more insight into this. What does dist search mean on the master and on the indexers? Does the master node need to have dist search enabled? In 'Cannot determine a latest common bundle' and 'Cannot find bundles for search peer', what does bundle mean here? I'm sure these are not knowledge bundles. Why is the indexer treating the master as a search peer?
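For reference, the setting in question lives in distsearch.conf; a minimal sketch of what re-enabling it on the master looks like (placement under $SPLUNK_HOME/etc/system/local is an assumption about this deployment):

# distsearch.conf
[distributedSearch]
disabled = false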
Hello, I am new to Splunk so I will try to be as clear as possible. I wanted to test the visualization of networkx graphs in the Splunk 3D Graph Network Topology App. I was able to load the CSV file of the graph successfully and I can see the data and the graph visualization. However, when I run the community detection algorithm, it shows me the following error:
Unknown search command: 'fit'
Can somebody help me fix the issue please? Thanks.
One of my new colleagues was working on a lookup in a Splunk app and seems to have somehow made a lookup table unavailable. His task involved creating and uploading a lookup CSV, creating a lookup definition, and creating an automatic lookup. No one else has been modifying this Splunk app. An existing lookup is no longer populating in a dashboard. I looked at the "datasets" and the lookup CSV exists and shows a last-modified date from well before this issue started, but no rows display when I click on it. I'm at a loss for what could have caused this or how to fix it. It seems the data still exists, as the modified date hasn't changed, but it has somehow become inaccessible.
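One quick way to check whether the CSV contents are still readable, independent of the dashboard and the automatic lookup, is to query the lookup file directly; a minimal sketch with a hypothetical lookup file name:

| inputlookup my_lookup_file.csv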
How can Splunk Connect for Syslog be deployed in a way that reduces/avoids lost events?
Hello, I am working on getting the logs into a dashboard. Files sit in the source directory for 2 minutes and are then moved to another server. My concern is whether there is a script that can pull the logs within that time frame. Thanks in advance. Henry
Hello all, I'm trying to put together a dashboard that - among other things - compares the success rate of various transactions over the last hour with the same hour a week ago. My base search results in rows that have two fields I particularly care about: event_name and event_status. My desired outcome would look something like this:

event_name   Last Hour   Last Week
event1       95%         96%
event2       85%         41%
event3       72%         100%
event4       25%         69%

Here is the current query I have, which seems to basically work:

<base query> earliest=-169h@h latest=now
| fields + event_name, event_status, _time
| fields - _raw
| eval weekAgoHour = relative_time(now(), "-168h@h")
| eval lastHour = relative_time(now(), "-1h@h")
| eval ReportKey = "omit"
| eval ReportKey = case(_time < weekAgoHour,"Last Week", _time > lastHour,"Last Hour")
| where ReportKey != "omit"
| eventstats count(eval(event_status=="FAILED")) as FailedCount, count(eval(event_status=="SUCCESS")) as SuccessCount by event_name, ReportKey
| eval pctSuccess = round(SuccessCount/(SuccessCount+FailedCount)*100, 1)."%"
| chart values(pctSuccess) by event_name, ReportKey

The problem here is that it has to look at hundreds of millions of irrelevant rows - everything that has happened in the last 169 hours. Surely there must be a more efficient way to do this? Maybe with multisearch?
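One possible restructuring - a sketch only, keeping <base query> as a placeholder - is to run two narrowly time-bounded searches with multisearch so only the two relevant hours are scanned:

| multisearch
    [ search <base query> earliest=-1h@h latest=@h | eval ReportKey="Last Hour" ]
    [ search <base query> earliest=-169h@h latest=-168h@h | eval ReportKey="Last Week" ]
| stats count(eval(event_status=="FAILED")) as FailedCount, count(eval(event_status=="SUCCESS")) as SuccessCount by event_name, ReportKey
| eval pctSuccess = round(SuccessCount/(SuccessCount+FailedCount)*100, 1)."%"
| chart values(pctSuccess) by event_name, ReportKey

Note that multisearch requires its subsearches to be streaming, which the eval used here satisfies.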
Why is this happening and is there a way to fix it? (1. The sparkline does not fill the width of the column it's supposed to be in - it is often significantly narrower than the column; 2. Sparkline width is inconsistent between similar searches when only the timeframe is changed.) Sample search (from "Add sparklines to search results"):

index=_internal | chart sparkline count by sourcetype

(Tested and reproduced in Splunk versions 7.2.3, 8.0.1, 8.0.4.1, and 8.0.5, and in multiple versions of Chrome and Safari.) P.S. The issue is also evident in Splunk's own documentation screenshots, e.g. in "Create a report from a sparkline chart" and "Add sparklines to search results".
Hi, I've successfully blacklisted Windows event 4658 with this line:

blacklist2 = $XmlRegex="<EventID>4658<\/EventID>.*<Data Name='ProcessName'>[C-F]:\\Windows\\System32\\CpqMgmt\\cqmghost\\cqmghost.exe"

I've tried to do the same for event 4656:

blacklist1 = $XmlRegex="<EventID>4656<\/EventID>.*<Data Name='ProcessName'>[C-F]:\\Windows\\System32\\CpqMgmt\\cqmghost\\cqmghost.exe"

but it isn't working. Any ideas?

inputs.conf:

[WinEventLog://Security]
disabled = 0
index = winevents
whitelist1 = 1102,4616,4647,4656-4658,4660,4663,4670,4672
whitelist2 = 4673,4674,4698-4702,4704,4705,4715,4719,4720
whitelist3 = 4722,4725,4726,4732,4733,4735,4738-4740,4767
whitelist3 = 4779,5140,5145
blacklist1 = $XmlRegex="<EventID>4656<\/EventID>.*<Data Name='ProcessName'>[C]:\\Windows\\System32\\CpqMgmt\\cqmghost\\cqmghost.exe"
blacklist2 = $XmlRegex="<EventID>4658<\/EventID>.*<Data Name='ProcessName'>[C-F]:\\Windows\\System32\\CpqMgmt\\cqmghost\\cqmghost.exe"

Raw event example:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4656</EventID><Version>1</Version><Level>0</Level><Task>12801</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2020-08-12T19:47:25.544399300Z'/><EventRecordID>1397935969</EventRecordID><Correlation/><Execution ProcessID='716' ThreadID='728'/><Channel>Security</Channel><Computer>svr-apl-cit-01.BANCOREGIONAL.LOCAL</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>SVR-APL-CIT-01$</Data><Data Name='SubjectDomainName'>BANCOREGIONAL</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='ObjectServer'>Security</Data><Data Name='ObjectType'>Key</Data><Data Name='ObjectName'>\REGISTRY\MACHINE\SYSTEM\ControlSet001\Services\SamSs</Data><Data Name='HandleId'>0x584</Data><Data Name='TransactionId'>{00000000-0000-0000-0000-000000000000}</Data><Data Name='AccessList'>%%1537 %%1538 %%1539 %%1540 %%4432 %%4433 %%4434 %%4435 %%4436 %%4437 </Data><Data Name='AccessReason'>-</Data><Data Name='AccessMask'>0xf003f</Data><Data Name='PrivilegeList'>-</Data><Data Name='RestrictedSidCount'>0</Data><Data Name='ProcessId'>0x1ec0</Data><Data Name='ProcessName'>C:\Windows\System32\CpqMgmt\cqmghost\cqmghost.exe</Data><Data Name='ResourceAttributes'>-</Data></EventData></Event>

Thanks in advance.
I have a scenario where we have 300 Windows servers; for a few reasons we are not able to install the Splunk forwarder on them. The alternative we are considering is WMI, since our Splunk Enterprise instance runs on Linux. We are planning to use a Windows server as an intermediate forwarder (UF), which will collect logs from the target servers' WMI providers and forward them to the Linux Splunk Enterprise instance.

Do I need to get the logs into the forwarder's inputs.conf using a PowerShell command, e.g.:

[powershell://CollectProcessInfoFromWmi]
script = Get-CimInstance Win32_Process | Select-Object Field1, Field2, Field3
schedule = 0 */5 * ? * *
sourcetype = Windows:MyWmiData

Or will this path in inputs.conf do the same?

[script://$SPLUNK_HOME\bin\scripts\splunk-wmi.path]
disabled = 0

What is the purpose of splunk-wmi.exe, and will it be present with the universal forwarder? Apart from this, will just placing wmi.conf in its appropriate location manage the target servers and the queries to their logs, assuming the Windows-specific configuration is done (domain account from AD, firewall, and permissions)? What are the installation steps for a UF to work with WMI? Are there any complexities I'm missing with the WMI approach? And how reliable would WMI be against 300 servers generating more than 30 GB of daily data to be indexed? As I understand it, WMI uses a polling mechanism, as opposed to UFs, which push the data; I've worked only with UFs in the past. Could network traffic or the intermediate forwarder become a bottleneck?
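For illustration, remote WMI event log collection is configured in wmi.conf on the Windows machine doing the collecting. A minimal sketch with hypothetical server names; whether a universal forwarder or a full (heavy) forwarder is required for WMI inputs is worth confirming in the documentation for your version:

# wmi.conf on the intermediate Windows forwarder
[WMI:RemoteSecurityLogs]
server = WINSRV01, WINSRV02
interval = 10
event_log_file = Security
disabled = 0
index = winevents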
Hi Splunkers, I have created a dashboard and I hard-coded the hostname. This has become an issue because I have multiple systems, and in order to reuse the dashboard I have to manually change the hostname in each of my visualizations inside the dashboard. Is there a way to make the hostname a variable and let the user choose which hostname they would like to use?

EX: List of hostnames: S0W1SANDBOX S0W1PRODUCTION S0W1 WAREHOUSE

host=S0W1PRODUCTION

Thank you!
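One common approach - sketched below in Simple XML - is a dropdown input whose token is referenced in each panel's query. The base search, time range, and the exact hostname values are assumptions here and would need to match your environment:

<form>
  <label>Host dashboard (example)</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="host_tok">
      <label>Host</label>
      <choice value="S0W1SANDBOX">S0W1SANDBOX</choice>
      <choice value="S0W1PRODUCTION">S0W1PRODUCTION</choice>
      <choice value="S0W1WAREHOUSE">S0W1WAREHOUSE</choice>
      <default>S0W1PRODUCTION</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>index=main host=$host_tok$ | stats count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
      </table>
    </panel>
  </row>
</form>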
Hello, I have Splunk Cloud 90-day searchable retention set for all indexes by default. I created a new index with only 2-day retention (intentional). The index filled with data as intended, but data older than 2 days did not get deleted. The index continues to grow regardless of the "Searchable Retention = 2 days" configuration. What's up with that? This is a new Splunk Cloud environment, although at v7.2.10.1. From the 'Data Quality' view of the Monitoring Console, I see the data is currently in 6 buckets and I have 1,730,000 events in the index, about 1.2 GB of data. Any advice on why this is happening would be appreciated.
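For context, retention is enforced per bucket: a bucket is only frozen (removed) once its newest event is older than the retention period, so an index with only a handful of buckets can hold data well past the nominal retention while old and new events share the same buckets. A minimal sketch to inspect bucket time spans, with a hypothetical index name:

| dbinspect index=my_two_day_index
| eval startTime=strftime(startEpoch, "%F %T"), endTime=strftime(endEpoch, "%F %T")
| table bucketId state startTime endTime eventCount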
We want XML-based logs over non-XML logs, but we are seeing both for some reason. Moreover, if we look at the log messages with source=WinEventLog:Security, for example, the sourcetype shows 'xmlwineventlog'. Is this normal/expected behavior, or is there some additional tuning we need to do?
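For reference, the setting that controls XML rendering for a Windows event log input lives in inputs.conf. A minimal sketch; the stanza shown is an assumption about how the input is defined in this environment, and duplicate events usually point to the same channel being collected by more than one stanza or app:

[WinEventLog://Security]
renderXml = true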
Hi all, I am trying to extract an IP and the word "HOST_NAME" from a raw log file using the following rex expression:

source="/var/tmp/test.log" | rex field=_raw "(?<HOST_NAME>) \b(?:[0-9]{1,3}\.){3}[0-9]{1,3}\b"

Log file:

EXEC_ID:
HOST_NAME: 172.19.20.60
USER_NAME: test
================================
TestCaseRunner Summary
-----------------------------
Time Taken: 13844ms
Total TestSuites: 2
Total TestCases: 6 (0 failed)
Total TestSteps: 16
Total Request Assertions: 19
Total Failed Assertions: 0
Total Exported Results: 0

The search results are not extracting the HOST_NAME field and the respective IP. Please suggest what I should change. Thank you.
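If the goal is to capture the IP that follows the "HOST_NAME:" label, the capture group has to wrap the IP pattern itself rather than sit empty in front of it. A minimal sketch; host_ip is just an example field name:

source="/var/tmp/test.log"
| rex field=_raw "HOST_NAME:\s+(?<host_ip>(?:[0-9]{1,3}\.){3}[0-9]{1,3})"
| table host_ip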
Hello All, I'm utilizing the Splunk App for AWS to capture data and represent it in easily identifiable dashboards. I'm working on the Security Groups dashboard under Security, and I'm having trouble getting data into some of the panels:
- I'm currently working on identifying how many security group rules there are within the whole environment.
- I'm having issues trying to pull fields that could help me with this.
- I've tried going into the AWS console to look at the syntax structure for some of the variables, but to no avail; I'm not able to pull any useful data.
Any help would be greatly appreciated. Thank you so much.
index=xxxx source="/esbplogsdir/prod/Enable/LOG_Maximo_LSI_Work/Maximo/LSI_IN_msg_prod.log" OR source="/esbplogsdir/prod/WS/LOG_Maximo_SmallWorld_IPICustomerInfo/Maximo/SmallWorld_IN_msg_prod.log"
| rex "(?i).*? \\- (?P<FIELDNAME>[a-f0-9]+\\-[a-f0-9]+\\-[a-f0-9]+\\-[a-f0-9]+\\-[a-f0-9]+)(?= )"
| rex "(?i):.*? \\- (?P<FIELDNAME>\\d+\\.\\d+)(?= )"
| search "[ERROR]" OR "failed"
| stats dc(FIELDNAME) as ERROR_TRANSACTION_COUNT by source
| rename source as SOURCE

This is the search, but if one or both sources return 0 results, I want a line that lists the log file and a 0 to show in my table. How can I do this?
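One common pattern - a sketch only, assuming the two source paths are the complete and fixed list - is to append a zero-valued row for every expected source and then take the max per source, so sources with no matching events still show up as 0:

... (existing search through the stats dc(...) by source step) ...
| append
    [| makeresults
     | eval source=split("/esbplogsdir/prod/Enable/LOG_Maximo_LSI_Work/Maximo/LSI_IN_msg_prod.log;/esbplogsdir/prod/WS/LOG_Maximo_SmallWorld_IPICustomerInfo/Maximo/SmallWorld_IN_msg_prod.log", ";")
     | mvexpand source
     | eval ERROR_TRANSACTION_COUNT=0
     | fields source ERROR_TRANSACTION_COUNT]
| stats max(ERROR_TRANSACTION_COUNT) as ERROR_TRANSACTION_COUNT by source
| rename source as SOURCE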
My current search is:

index=rtm* source=/prod/msp/logs/private-auto-loan-credit*
| regex "The rule (?<field1>[a-zA-Z0-9]+_[a-zA-Z0-9]+)_(?<field2>[a-zA-Z0-9]+) with"
| table field1, field2

In verbose mode, it finds the correct entries, but my table is full of nulls. What am I doing wrong?
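For what it's worth, the regex command only filters events and does not create fields, so capture groups inside it won't populate a table; rex is the command that performs the extraction. A minimal sketch of the same pipeline with rex:

index=rtm* source=/prod/msp/logs/private-auto-loan-credit*
| rex "The rule (?<field1>[a-zA-Z0-9]+_[a-zA-Z0-9]+)_(?<field2>[a-zA-Z0-9]+) with"
| table field1, field2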