
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

After ePO is upgraded to the latest version, a new SQL database is created in addition to the existing ePO database (ePO_Servername), with the format ePO_Servername_Events. I get the results of the SQL query successfully when I choose the ePO_Servername database, as it seems to have all the tables required for the join operation:
• EPOEvents
• EPOLeafNode
• EPOProdPropsView_VIRUSCAN
• EPOComputerProperties
• EPOEventFilterDesc
However, the DBA says I should use the ePO_Servername_Events database, but when I use this DB, all I get is the EPOEvents table. I am sure a few here have had to work through this same problem. Can you please share which DB you chose, or how you made this work? Thanks in advance.
I have two inputs: one is a dropdown that specifies the file type, Incoming or Outgoing, and the other is a radio button with three SLA levels, Met, Warn, and Breach. I want to display different panels based on the different combinations of file type (selected via the dropdown) and SLA level. How do I check the values set by both tokens in a <change> condition using match?
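A minimal Simple XML sketch of one approach, assuming the dropdown sets a token named filetype and the radio sets sla (both names are placeholders, and the &quot; escaping may need adjusting for your dashboard): the radio's <change> handler evaluates a match expression that references both tokens, then sets or unsets a panel-visibility token that each panel declares in depends.

    <input type="radio" token="sla">
      <choice value="Met">Met</choice>
      <choice value="Warn">Warn</choice>
      <choice value="Breach">Breach</choice>
      <change>
        <!-- show panel A only for Incoming + Met; token names are assumptions -->
        <condition match="&quot;$filetype$&quot;==&quot;Incoming&quot; AND &quot;$value$&quot;==&quot;Met&quot;">
          <set token="show_panel_a">true</set>
          <unset token="show_panel_b"></unset>
        </condition>
        <condition>
          <unset token="show_panel_a"></unset>
          <set token="show_panel_b">true</set>
        </condition>
      </change>
    </input>

Each panel would then carry depends="$show_panel_a$" (or $show_panel_b$) so it only renders when its token is set.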
Hi all, I have a dashboard with 8 panels that takes 58 seconds to run. The panels search over one hour of data and auto-refresh once an hour. Can summary indexing help me improve this, by any chance?
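A rough sketch of the usual pattern, with placeholder index, sourcetype, and field names: a scheduled search pre-aggregates the previous hour into a summary index, and the dashboard panels read the summary instead of recomputing over raw events.

    Scheduled search (runs hourly over the previous hour, with summary indexing enabled):
        index=my_index sourcetype=my_sourcetype earliest=-1h@h latest=@h
        | sistats count avg(response_time) by host

    Dashboard panel search:
        index=summary source="my_hourly_summary" | stats count avg(response_time) by host

Whether this helps depends on how much of the 58 seconds is spent scanning raw events versus rendering; the si- commands let the consuming stats produce the same statistics from the pre-summarized rows.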
Hello! I’m working on streaming telemetry data to Splunk. I use Splunk Universal Forwarder v7 x86_64 to capture and stream data to Splunk Enterprise 8. I use script:// inputs to capture data and run them at specified intervals. The data is being successfully streamed to the server. But, intermittently, splunkd (SUF) crashes, and I see the following errors in my splunkd.log:

06-02-2020 17:12:27.975 -0700 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
06-02-2020 17:12:27.993 -0700 INFO WatchedFile - Will begin reading at offset=1182 for file='/opt/splunkforwarder/var/log/splunk/splunkd-utility.log'.
06-02-2020 17:12:56.832 -0700 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views
06-02-2020 17:30:37.696 -0700 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
06-02-2020 17:53:37.315 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: ERROR - Failed opening "": No such file or directory
06-02-2020 17:53:37.316 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: terminate called after throwing an instance of 'EventLoopException'
06-02-2020 17:53:37.316 -0700 ERROR ProcessRunner - Error from ProcessRunner helper process: what(): Main Thread: about to throw an EventLoopException: error from EventLoop poll: No such file or directory
06-02-2020 17:53:37.676 -0700 FATAL ProcessRunner - Unexpected EOF from process runner child!

I have tried digging through Splunk Answers and Google, but I couldn’t find much documentation on what file ProcessRunner was trying to open. Could someone help me, or point me to the right channel, to understand how I can fix this issue?

Here are my inputs.conf script stanzas:

[script://$SPLUNK_HOME/bin/scripts/<script-one>.py]
source = source-one
sourcetype = source-one

[script://$SPLUNK_HOME/bin/scripts/<script-two>.path]
source = source-two
sourcetype = source-two
interval = 60

[script://$SPLUNK_HOME/bin/scripts/<script-three>.path]
source = source-three
sourcetype = source-three
interval = 1800

[script://$SPLUNK_HOME/bin/scripts/<script-four>.path]
source = source-four
sourcetype = source-four
interval = 1800

Thank you!
I have two inputs: one is a dropdown that specifies the file type, Incoming or Outgoing, and the other is a radio button with three SLA levels, Met, Warn, and Breach. I want to display a panel based on the file type selected in the dropdown and the SLA level. How do I check the values set by both tokens in a <change> condition?
Hi, I have two different indexes where I need to match a field and, if it matches, return another field.

First search (Index1):
FileName         DeviceName
explorer.exe     myserver.test.com
processor.dll    anothersystem.xyz.abc
third.exe        yetanother.aaa.bbb
another.exe      myserver.test.com

Second search (Index2):
HostName               Owner
MYserver.test.com      bob@sample.com
nonEXistent.abc.ccc    larry@sample.com
yetANOTHER.aaa.bbb     charlie@sample.com

Desired search result:
DeviceName            FileName        Owner
myserver.test.com     explorer.exe    bob@sample.com
                      another.exe
yetanother.aaa.bbb    third.exe       charlie@sample.com

A couple of things to notice:
1. I need to show results where DeviceName and HostName match.
2. The two fields may differ in case, so case-insensitive matching is required.
3. If DeviceName == HostName, I need the Owner field returned from Index2.
4. One DeviceName/HostName may have many FileNames under it, and I need to display all of them (explorer.exe + another.exe).

I've been tinkering around and am having a hard time finding the right query. Here's where I'm at:

(index=index1 sourcetype=type1 FileName=somecondition*) OR (index=index2 sourcetype=type2)
| fields FileName, DeviceName, Owner, HostName
| eval magic=case(DeviceName==HostName, Owner)
| stats list(FileName) as FileName, list(magic) as SysOwner by DeviceName

It doesn't work, though. I tried variations of the eval statement using if, coalesce, and a few other solutions from other questions, but I believe the case difference between the two fields is what is hindering me. I'm still new to Splunk and any help would be appreciated!
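One common pattern, sketched with the field names from the question: normalize both fields to lowercase into a shared join key, then aggregate by that key so the Owner from index2 lines up with the FileNames from index1.

    (index=index1 sourcetype=type1 FileName=somecondition*) OR (index=index2 sourcetype=type2)
    | eval joinkey=lower(coalesce(DeviceName, HostName))
    | stats values(FileName) as FileName, values(Owner) as Owner, values(DeviceName) as DeviceName by joinkey
    | where isnotnull(FileName) AND isnotnull(Owner)
    | table DeviceName, FileName, Owner

The where clause drops hosts that appear in only one of the two indexes; DeviceName in the output keeps the original casing from index1 because values() collects the raw field values.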
Dear All, I have two columns, Id and CorrelationalId; below is a sample:

Id    CorrelationalId
1     2
2     3
3     4

I am looking to get the following as an output:

RelatedCorrelationalId
1
2
3
4

Please can someone guide me on the above issue. Regards, Santosh
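A sketch of one way to get the distinct union of both columns, assuming the existing search already returns Id and CorrelationalId: merge the two fields into one multivalue field, expand it into separate rows, and de-duplicate.

    ... | eval RelatedCorrelationalId=mvappend(Id, CorrelationalId)
    | fields RelatedCorrelationalId
    | mvexpand RelatedCorrelationalId
    | dedup RelatedCorrelationalId
    | sort RelatedCorrelationalId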
Hi, I am looking for an email in a KPI ad-hoc search which is supposed to arrive at 07:15 am. If it doesn't arrive at 07:15, I want the KPI threshold to go amber, and if I still haven't received it by 07:45 (30 minutes later), I want the threshold value to turn red. Only when the email arrives should the value change to green. Any help appreciated. Thanks
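A hedged sketch of an ad-hoc search the KPI could use, with the index and subject below as placeholders: it returns how many minutes past 07:15 the email is, so the ITSI thresholds can be set to green at 0, amber above 0, and red at 30 or more.

    index=mail subject="Expected report" earliest=@d latest=now
    | stats count AS arrived
    | eval expected=relative_time(now(), "@d+7h+15m")
    | eval minutes_late=if(arrived>0 OR now()<expected, 0, round((now()-expected)/60))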
Hi everyone, Is it possible to add a threat feed on Splunk Enterprise, specifically for the InfoSec App? There is no Splunk ES deployed. Thanks, Crizelle
We have a relatively small Splunk implementation - just one standalone server. We're downloading Cisco Umbrella logs from the Cisco-managed S3 bucket for reporting purposes. We now also need to forward those Umbrella logs to a syslog server, in addition to keeping them on the standalone for reporting. Is there a way to configure a standalone to forward to a syslog server?
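Splunk can route data to a third-party syslog host via a syslog output group in outputs.conf; a minimal sketch, with the destination host and port as placeholders:

    # outputs.conf on the standalone server
    [syslog]
    defaultGroup = my_syslog_group

    [syslog:my_syslog_group]
    server = syslog.example.com:514
    type = udp

With defaultGroup set, all routable data goes to the group; forwarding only the Umbrella sourcetype would additionally need a props/transforms rule that sets _SYSLOG_ROUTING for those events.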
I am able to list all fields/columns from the index; however, I only want to list a few, not all (*). I cannot seem to find a way to restrict the display to certain columns. Is there a way to limit the list in a drop-down? My current search is: sourcetype=iis | fieldsummary | table field
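A sketch of one way to narrow the list, using made-up IIS field names as placeholders: filter the fieldsummary output down to the fields you want before tabling it.

    sourcetype=iis
    | fieldsummary
    | search field IN ("c_ip", "sc_status", "cs_uri_stem")
    | table field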
Hello, I am trying to set _time from a stanza that is applied after the sourcetype is forced. I am using a generic, catch-all sourcetype stanza initially to receive data from the HTTP Event Collector, and then force the events to their appropriate sourcetypes via transforms. This sourcetype forcing works perfectly and assigns the correct sourcetypes 2, 3, and 4 below, as expected. Now, I want _time to be set from the TIME_PREFIX defined within each forced stanza, but this is not working. The _time is always set from the TIME_PREFIX in the first stanza (sourcetype_1), or, if I don't specify a TIME_PREFIX in the first stanza, Splunk still assigns a _time based on the default time rules. Is there any way to have _time set within each forced sourcetype stanza, or will it only ever work from the first stanza at index time? I could probably create my own datetime.xml, but I was hoping to force the sourcetype and have the _time value set within each forced sourcetype stanza. Below is an example:

[sourcetype_1 catch all]
Do not assign _time here. Force sourcetypes 2-4 and have _time assigned in those stanzas.

[sourcetype_2]
Want _time set here based on TIME_PREFIX.

[sourcetype_3]
Want _time set here based on TIME_PREFIX.

[sourcetype_4]
Want _time set here based on TIME_PREFIX.
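For reference, a sketch of the kind of configuration being described, with placeholder stanza names, patterns, and timestamp format; it illustrates the layout rather than a confirmed fix, since index-time timestamp extraction runs before the sourcetype override is applied, which is consistent with the behavior described.

    # props.conf
    [sourcetype_1]
    TRANSFORMS-force_st = force_st2, force_st3, force_st4

    [sourcetype_2]
    TIME_PREFIX = "eventTime":\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S

    # transforms.conf
    [force_st2]
    REGEX = <pattern that identifies sourcetype_2 events>
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::sourcetype_2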
I have a requirement to push a subset of universal- and heavy-forwarder-originated data to a third party, for which I enabled a set of HFs dedicated to data forwarding. This is working fine, as the data arrives uncooked at a target syslog-ng. The troublesome part was being asked to ensure the HF resends the data in case the target undergoes maintenance, or has an outage lasting up to 2 days. Considering persistent queues don't work over splunktcp streams, is it an option for me to push uncooked data to the HFs over a standard TCP input (not splunktcp) with a persistent queue enabled, say, at 200GB? I've never heard of anyone using this approach. Would it work?
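For reference, persistent queues are configured per input in inputs.conf; a minimal sketch for a plain TCP input on the HF, with the port and sizes as placeholders:

    # inputs.conf on the heavy forwarder
    [tcp://9999]
    persistentQueueSize = 200GB
    queueSize = 10MB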
We have set up a clustered Splunk Enterprise environment, and we have recently seen multiple scheduled searches getting skipped, with the observed skip ratio varying from 80% to 99%. Scrolling through the forum, we have seen that the relevant values can be modified via the limits.conf file. My question is whether changing the concurrency value or the search allocation quota for scheduled searches is advised as a best practice, or would that have repercussions in the long run? Also, what could be considered a quick remedial action to mitigate this issue?
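For reference, the settings usually discussed for this, sketched with roughly their default values; these are illustrative, not recommendations, and should only be raised after checking CPU headroom on the search heads.

    # limits.conf
    [search]
    base_max_searches = 6
    max_searches_per_cpu = 1

    [scheduler]
    max_searches_perc = 50
    auto_summary_perc = 50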
When we launch the Splunk Home or Search page, there is this metadata search that runs in real time, eating up the resources available at hand:

|metadata type=sourcetypes | search totalcount>0

I have read other answers here in the forum that say it can be disabled by changing the config in ui-prefs.conf. My question is: what would be the repercussions of this tweak - is this search actually used somewhere else? And what is the purpose/benefit/importance of this search?
These rows have a field that begins and ends with a quote, but the values between the backslashes have different meanings.

The 1st and 2nd rows are 'Server_Name\Instance_Name':
from 'vmpit-ugzcg8xk\MSSQLSERVER'
from 'vmpit-ugzcg8xk.lm.lmig.com\MSSQLSERVER'

The 3rd and 4th rows are 'AOAG_Name\Server_Name\Instance_Name':
from 'rbrk_ag1\vmpit-ugzcg8xk\MSSQLSERVER'
from 'rbrk_ag1\vmpit-ugzcg8xk.lm.lmig.com\MSSQLSERVER'

I need a rex command that finds Server_Name, Instance_Name, and AOAG_Name from these 4 rows (AOAG_Name would have no value in the rows where it is not applicable). My 'old' rex command, before the data changed, was:

| rex field=_raw "from [\'](?[^\']\w+-\w+)"

This is probably pretty easy for someone who is good with rex, but I am not and have not yet figured out how to do it. Would anyone be able to help with this?
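A sketch that sidesteps most of the backslash escaping inside rex by capturing the first quoted path after "from" and then splitting it on backslashes with eval (field names follow the question; quoting may need adjusting for your events, and this only reads the first quoted path in each row, not the FQDN variant):

    | rex field=_raw "from '(?<full_path>[^']+)'"
    | eval parts=split(full_path, "\\")
    | eval AOAG_Name=if(mvcount(parts)==3, mvindex(parts, 0), null())
    | eval Server_Name=if(mvcount(parts)==3, mvindex(parts, 1), mvindex(parts, 0))
    | eval Instance_Name=mvindex(parts, -1)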
I have a search that produces a table with a column listing the number of segments for a schedule. The table is shown below. I want to filter on the maximum number of segments (either 2 or 3). This is the query:

...search
| table purchCostReference, eventType, Time, Segments, Carriers, BillingMethod, Origin, Destination, StopOffLocation
| stats max(Segments) as TotalSegments by purchCostReference, eventType
| search TotalSegments = 2
| sort Time

I can use max to get the maximum number of segments and then filter on the number of segments I need, but not all of the data is returned - only the columns used in the stats - and I don't want the TotalSegments column displayed. I want to return only the rows that have 2 segments, without the extra TotalSegments column.
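One way to keep all of the original columns is eventstats, which adds the per-schedule maximum as a field without collapsing the rows, so it can drive the filter and then be dropped; a sketch based on the query in the question:

    ...search
    | eventstats max(Segments) as TotalSegments by purchCostReference, eventType
    | where Segments==TotalSegments
    | fields - TotalSegments
    | table purchCostReference, eventType, Time, Segments, Carriers, BillingMethod, Origin, Destination, StopOffLocation
    | sort Time

If the goal is specifically "rows with exactly 2 segments" rather than "rows at the maximum", the where clause can simply become | where Segments==2.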
I am using the SDK to create my first custom search command. I'm using the Splunk Free version to test it out. It works great for relatively small numbers of records (10-50). For larger record counts (100+), about 20 records are processed before I see this in search.log:

06-02-2020 15:53:18.427 WARN SearchResultWorkUnit - timed out, sending keepalive nConsecutiveKeepalive=0 currentSetStart=1591052862.000000
06-02-2020 15:53:18.427 INFO ResultsCollationProcessor - just a keepalive, ignoring
06-02-2020 15:53:28.428 INFO TimelineCreator - Commit timeline at cursor=2147483647.000000
06-02-2020 15:53:28.429 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
06-02-2020 15:53:38.430 INFO TimelineCreator - Commit timeline at cursor=2147483647.000000
06-02-2020 15:53:38.430 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW

This block repeats until my maxwait setting is reached. Then I see this block:

06-02-2020 15:57:48.524 WARN NetUtils - select_for timeout hit waiting for read
06-02-2020 15:57:48.524 WARN NetUtils - readWithTimeout failed. ret=-2
06-02-2020 15:57:48.524 ERROR ChunkedExternProcessor - Error or timeout while attempting to read transport header (Resource temporarily unavailable)
06-02-2020 15:57:48.679 ERROR ChunkedExternProcessor - Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.
06-02-2020 15:57:48.679 ERROR LocalCollector - sid: Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.
06-02-2020 15:57:48.679 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 ERROR DispatchThread - sid:1591127547.502 Search results might be incomplete: the search process on the local peer:CL2 ended prematurely. Check the local peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search.
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 INFO ReducePhaseExecutor - Ending phase_1
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.680 ERROR SearchOrchestrator - Phase_1 failed due to : Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.
06-02-2020 15:57:48.680 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
06-02-2020 15:57:48.680 INFO DispatchExecutor - User applied action=CANCEL while status=0
06-02-2020 15:57:48.680 ERROR SearchStatusEnforcer - sid:1591127547.502 Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.
06-02-2020 15:57:48.680 INFO SearchStatusEnforcer - State changed to FAILED due to: Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.
06-02-2020 15:57:48.684 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.684 INFO DispatchManager - DispatchManager::dispatchHasFinished(id='1591127547.502', username='admin')
06-02-2020 15:57:48.684 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.685 INFO ISearchOperator - 0x7f348fb51d00 PREAD_HISTOGRAM: usec_1_8=21 usec_8_64=0 usec_64_512=0 usec_512_4096=0 usec_4096_32768=0 usec_32768_262144=0 usec_262144_INF=0
06-02-2020 15:57:48.687 INFO UserManager - Unwound user context: NULL -> NULL
06-02-2020 15:57:48.687 INFO LookupProviderFactory - Clearing out lookup shared provider map
06-02-2020 15:57:48.689 ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Error in 'urlstatus' command: Invalid message received from external search command during search, see search.log.

Can someone explain what a SearchResultWorkUnit is and why it would time out? And why would the keepalive be ignored? Is there a setting I can change somewhere?
I have a distributed Splunk environment running on Azure IaaS. I need to start rolling my cold data off to archive, and it looks like our best option is going to be blob storage. I have found plenty of information on how to do this for AWS and S3; however, Azure Blob Storage doesn't support the S3 API. I have found a tool called AzCopy that looks like it might be part of the puzzle. Is anyone currently doing this, and if so, do you have a script you could sanitize and let me look at as a starting point?
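Not a tested setup, but the usual shape of this is a coldToFrozenScript in indexes.conf that hands each rolled bucket to a script, which could then call AzCopy with a SAS URL; every path, index, container, and token below is a placeholder.

    # indexes.conf
    [my_index]
    coldToFrozenScript = "/opt/splunk/bin/archive_to_blob.sh"

    # /opt/splunk/bin/archive_to_blob.sh
    #!/bin/bash
    # Splunk passes the bucket directory as the first argument when the bucket freezes
    BUCKET_PATH="$1"
    DEST="https://mystorageaccount.blob.core.windows.net/splunk-frozen?<sas-token>"
    /usr/local/bin/azcopy copy "$BUCKET_PATH" "$DEST" --recursive

Splunk deletes the bucket after the script exits successfully, so the script should only return 0 once the copy has completed.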
Hi, folks. I'm trying to timechart the average duration, but I'm not getting average values across the whole time range. The query is like this:

(index=a) OR (index=b)
| transaction Reg_ID
| search eventcount=2
| bin _time span=1m
| timechart avg(duration) as media

(date range: 15 min)

But it only shows results for 5 minutes, for example, even when I compute the average with stats (sum and count) instead. I can clarify more if you need. Thanks for the help!
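A sketch of a simplified version: the separate bin is redundant because timechart bins on its own, and the span can be set directly on timechart. Note also that transaction stamps each transaction with the _time of its earliest event, which can bunch results into fewer buckets than the date range suggests.

    (index=a) OR (index=b)
    | transaction Reg_ID
    | search eventcount=2
    | timechart span=1m avg(duration) as media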