Hi Team - Need your expertise in regex. Below is the raw log. I need to extract the date and time; the only unique markers are the words "START" and "END". The goal is to find the response time between START and END, in a table format. Note: there are no spaces in the log.

START</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName<programName>GstarRecipientService_MF</programName><functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53/utility/logging"><enteringExiting>END</enteringExiting><logLevel>INFO</logLevel><messageType>LOG</messageType><applicationName>GstarSOA</applicationName><programName>GstarRecipientService_MF</programName<functionName>GetRecipient</functionName><host>PerfNode0</host><messageDetails>2022-06-17 04:10:53
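A minimal sketch of one way to pull the two timestamps and diff them, assuming START and END land in the same event and each is followed by its own <messageDetails> timestamp (the field names start_time, end_time, and response_secs are placeholders):

... your base search ...
| rex "START</enteringExiting>.*?<messageDetails>(?<start_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| rex "END</enteringExiting>.*?<messageDetails>(?<end_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval response_secs=strptime(end_time, "%Y-%m-%d %H:%M:%S") - strptime(start_time, "%Y-%m-%d %H:%M:%S")
| table start_time end_time response_secs

If START and END arrive as separate events instead, the transaction command (e.g. | transaction startswith="START" endswith="END") would produce a duration field directly.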
We are about to open a Splunk ticket for this issue, but figured we'd check with the community first.

Problem: The tstats command is not seeing all of our indexed data, and queries would suggest that our Forwarders are not sending data, which isn't true. We've run multiple queries against the index confirming the expected data exists in the index and the fields are indexed. In addition, the hosts show up in the data summary for the index. We are searching within a timeline in which events do exist in the index, so it's not like we are searching for data that never existed. We even performed a restart of the Splunk service and noted that a significant number of hosts' data in the index stopped being processed by tstats / tsidx, according to the timestamp of the latest event for those hosts. It coincides with the Splunk restart, but the data never becomes visible to tstats again, even after several hours. Other hosts' data is processed as expected, so we have some hosts with current "lastSeen" times:

| tstats count max(_time) as lastSeen where index=windows_sec earliest=-20d@d latest=@m by host | convert ctime(lastSeen)

Command that results in missing hosts:

| tstats values(host) by index

Similar command that also results in the same "missing" hosts (Fast Mode):

index=* | stats values(host) by index

Modifying the above command from Fast to Verbose mode results in all hosts being displayed as expected.

Additional info: Splunk v8.2.6 - no correlation between different Forwarder versions either. Splunkd.log has been analyzed line by line pre/post Splunk service restart; no leads there. Tsidx reduction is (and always has been) disabled for all of our indexes. We have seen very similar behavior for other queries where Fast Mode results in missing data, but simply changing the mode to Verbose instantly populates all expected data in the results. We have even verified that all fields are identified in the initial "generating" query - no difference in Fast Mode.

This seems like a super basic issue but has completely baffled us for some time, and it is causing serious heartburn and lack of trust in the data being presented to users. It's almost like a caching issue of some sort, but we are grasping at straws now. Any thoughts/ideas would be welcome. Thanks.
Hi All, I have a multivalue field with a bunch of different values. I want to learn how to pull specific values based on string criteria. For example, the multivalue field may contain "App: A;  sn_ubs;  Owner_Bob; Criticality_3;". How would I create an eval to pull just the "sn_ubs" value into a new field named SN? I am unsure of what manipulation does this. I have tried mvfilter, and that works, but it doesn't break out the value.
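A minimal sketch, assuming the values are separate entries of a multivalue field here called my_mv_field (the field name is a placeholder): mvfilter keeps only the entries matching the pattern, and mvindex pulls the first match out as a single value.

| eval SN=mvindex(mvfilter(match(my_mv_field, "sn_")), 0)

If the data is actually one semicolon-delimited string rather than a true multivalue field, splitting it first (| eval my_mv_field=split(my_mv_field, ";")) before the line above should work.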
Splunkers, I just updated my app db_connect. Now all my connections are broken. I think they are forcing SSL now and that has broken them. This is the error it produces:

The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target".

I tried setting the key value pair to encrypt=false. I then get this error, and my server team says it's no longer using Kerberos:

Login failed for user 'SVCSplunkDBRead'. ClientConnectionId:5fb7a943-44bb-46ce-bf52-63a9c90643df

Any advice on how to fix the issue would be super awesome! I don't think the server team is going to turn on SSL right now.

These are my local confs:

inputs.conf

[http://db-connect-http-input]
disabled = 0

db_connection.conf

[EEHProd]
connection_type = generic_mssql_kerberos
database = EnterpriseExceptionSystem
disabled = 0
host = SQLSERVER
identity = SplunkDBRead
jdbcUseSSL = true
localTimezoneConversionEnabled = false
port = 1433
readonly = true
timezone = America/Denver
customizedJdbcUrl = jdbc:sqlserver://SQLSERVER:1433;databaseName=EnterpriseExceptionSystem;selectMethod=cursor;encrypt=true;MultiSubNetFailover=True

identities.conf

[SplunkDBRead]
disabled = 0
domain_name = ipce
password = somepassword
use_win_auth = true
username = SVCSplunkDBRead
identity_type = normal
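One workaround to consider (a sketch, not a definitive fix): the PKIX error means the JDBC driver does not trust the SQL Server certificate, so keeping encryption on but skipping certificate validation with the Microsoft driver's trustServerCertificate parameter may get the connection back up without the server team installing a trusted certificate. Only the added parameter below is new; everything else is copied from the config above.

customizedJdbcUrl = jdbc:sqlserver://SQLSERVER:1433;databaseName=EnterpriseExceptionSystem;selectMethod=cursor;encrypt=true;trustServerCertificate=true;MultiSubNetFailover=True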
When I add this case statement to my search, all results for Severity are "Other". What did I miss?

| eval Severity=case(score>=0.1 AND score<=3.9, "Low", score>=4.0 AND score<=6.9, "Medium", score>=7.0 AND score<=8.9, "High", score>=9.0 AND score<=10.0, "Critical", true(), "Other")
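A common cause (just a guess without seeing the data) is that score is extracted as a string, so every numeric comparison fails and the case falls through to the default. A sketch that converts it first, with everything else unchanged:

| eval score=tonumber(score)
| eval Severity=case(score>=0.1 AND score<=3.9, "Low", score>=4.0 AND score<=6.9, "Medium", score>=7.0 AND score<=8.9, "High", score>=9.0 AND score<=10.0, "Critical", true(), "Other")

Note also that the ranges have small gaps (e.g. a score of 3.95 matches only the default), which is separate from the "everything is Other" symptom.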
Hello gurus, I'm trying to return a percentage from the results of subsearches. The values User_count and Device_count are numerical, but the eval returns nothing. If I convert either of the values to a number and leave the other named, the eval works. Could you offer a suggestion to make this search work please? Thank you!

index="test" earliest=-2d@d latest=-1d@d
| dedup User
| stats count(User) as User_count
| append [search index="test" | stats dc(SerialNumber) as Device_count]
| eval perc=round(User_count/Device_count*100, 2)."%"
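One thing worth checking (a sketch, assuming the rest of the search stays as-is): append adds Device_count as a separate row, so the eval never sees User_count and Device_count on the same result. Swapping append for appendcols puts the subsearch result in a new column of the same row:

index="test" earliest=-2d@d latest=-1d@d
| dedup User
| stats count(User) as User_count
| appendcols [search index="test" | stats dc(SerialNumber) as Device_count]
| eval perc=round(User_count/Device_count*100, 2)."%"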
Newly released Splunk 9 introduced an error about invalid stanzas in `federated.conf`. Does anybody know how to fix this?

Invalid key in stanza [provider:splunk] in /opt/splunk/etc/system/default/federated.conf, line 20: mode (value: standard).
Invalid key in stanza [general] in /opt/splunk/etc/system/default/federated.conf, line 23: needs_consent (value: true).
Hello, I am a bit confused here. I have a search that runs and creates a multivalue field called "tags{}.name". This is a multivalue field pulled from JSON data. However, when I then use the output of that search in a different search, the field is no longer multivalue and breaks if I try to split it. I need to either make this field delimited or ensure it remains a multivalue field. Any help?

Search 1: field is multivalue.
Search 2: field is no longer multivalue after using lookup.
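A minimal sketch of one way to carry the values across, assuming search 1 writes a lookup with outputlookup and search 2 reads it back with lookup (my_tags.csv and the key field are placeholders): join the multivalue field into one delimited string before writing, then split it again after reading.

Search 1:
... | eval tags=mvjoin('tags{}.name', ";") | outputlookup my_tags.csv

Search 2:
... | lookup my_tags.csv <key_field> OUTPUT tags | eval tags=split(tags, ";")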
Hi friends, do you know what roles or capabilities I need to set action.email = true for an alert in Splunk Cloud via script? I appreciate it, thanks!
Hello, I have a new installation of Splunk 9.X. The instance is hosted on Ubuntu OS, on Azure Cloud. There is no Public IP associated with the instance. The instance can only be accessed via the associated Private IP address (peering is established between Azure and my internal company network). I tried to telnet to the instance on port 8000, and it is accepting connections. In parallel, when I launch TCPDUMP and refresh the browser, I can see packets on TCPDUMP. In spite of this, I am unable to access the instance via the web console; I get CONNECTION RESET in the browser. Please advise. -- Thanks, Siddarth
Hi Splunk Community, I am having a problem with saved searches not saving the full results. I have a saved search that is supposed to save ~100k results. When I go to the saved searches tab and manually click run, it saves all 100k results. When it runs on its schedule, it only saves 50k. Is there a setting I need to change in order for it to save all results? Thanks in advance!
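If the "saving" happens through an alert action on the schedule (e.g. writing results to a lookup or sending them somewhere), one suspect (an assumption, not confirmed for this case) is the limits.conf cap on how many results are handed to scheduled alert actions, which defaults to around 50,000. A sketch of the override to try on the search head:

# $SPLUNK_HOME/etc/system/local/limits.conf
[scheduler]
max_action_results = 100000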
I'm trying to do a little testing of our Splunk installation and I am looking at the Python SDK examples. I've run the inputs.py script hoping it would get me a list of all the inputs defined in our inputs.conf, but it only seems to return a small subset. I don't see any logic as to why it returns the subset it does. I get about 30 records back from inputs.py; I expect well over 100. Running "splunk btool inputs list" gives me the set I expect, but I don't want to have to run this locally. Are there any rules or restrictions in the Python SDK API to get the list of inputs? Maybe some options are missing in the sample to get the defined inputs list? Thanks, Rob.
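The roughly 30 entries suggests the REST API's default page size, which the SDK inherits when a collection is listed without a count argument. A minimal sketch, assuming the splunklib package and placeholder credentials, that asks for everything with count=-1:

import splunklib.client as client

# Connect to the management port (host/credentials are placeholders).
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Without count=-1 the collection is paged (30 entries by default).
for item in service.inputs.list(count=-1):
    print(item.kind, item.name)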
index = "abc" required_field = "xx" | table date - gives me a single string in the table. How can I store this string in a variable and use it in any other index. Thank you 
In our environment (Phantom version 4.10.3.x), the HEC (HTTP Event Collector) server name that is used as the "Indexer Host" (i.e. the Phantom UI field label for the HEC server for a "Distributed Splunk Enterprise Deployment") was changed recently. The new server name was entered into the "Indexer Host" field, "Test Connection" was successful, and "reindex" was successful. However, it was later noticed that no new event data was being ingested into the Splunk Enterprise phantom* indexes. The resolution was to restart Phantom and "reindex" again for the missing events in the phantom* indexes. It is suspected that the process for ingesting new events into the phantom* indexes does not pick up changes to the "Indexer Host" field until Phantom is restarted; however, the processes for "Test Connection" and "reindex" appeared to work without a Phantom restart. No reference that a Phantom restart is required was found in the online documentation. Does anyone have more on this issue/bug/phenomenon, and/or has anyone else experienced it?
I have a master dashboard with many icons that have drilldown settings enabled to open another dashboard in another tab. Recently, it has started this weird behavior of opening two tabs of the icon URL I click on instead of one. My exact settings are: Drilldown settings / On click, link to custom URL / the URL I have entered is correct and opens the right URL that I want / and the box is checked for Open in a new tab. I am not doing any user error like clicking too many times. This issue occurs when opening any drilldown link, and the master dashboard historically did work properly by only opening one tab of the new dashboard search. Has anyone seen this before? It is a Splunk Enterprise instance and I am looking at the dashboard from a Mac. Note: This is an issue because the drilldown URL is another dashboard; it's loading the same dashboard twice, so when I click around it uses a lot of resources loading many dashboards unnecessarily.
This is the log I am getting in Splunk:

msg: 2022-01-22 03:00:00.143 INFO 15 --- [ scheduling-1PurgeProcessCountTask : engine:Engine12 Cleanable Process Instance Count {"exception_management_workflow":{"finishedCount":6621,"cleanableCount":1113}}

I want output like:

Engine      finishedProcessInstanceCount
Engine12    6621

Could you please help me with that? I am trying the query below, but it is not working:

index=abc cf_app_name="DEV" | rex field=_raw "engine.(?<pam>.........) ,finishedProcessInstanceCount...(?<sam>..\d+)" | table pam, sam
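A minimal sketch that anchors on the literal text in the sample event instead of counting characters; the field names Engine and finishedProcessInstanceCount match the desired output, everything else is copied from the question:

index=abc cf_app_name="DEV"
| rex "engine:(?<Engine>\w+)"
| rex "\"finishedCount\":(?<finishedProcessInstanceCount>\d+)"
| table Engine, finishedProcessInstanceCount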
I have multiple dropdowns with numerous (some 40+) selections per dropdown. What I would like to do is create a "Common KPI" dropdown with 3 choices to drive selections in the other dropdowns. I can make this work for a single value, but not when there are multi-select values.

For example:
Common KPI dropdown - Choice "1" would select the following:
Dropdown 2 selections: Apple, Banana
Dropdown 3 selection: Golf

Common KPI dropdown - Choice "2" would select the following:
Dropdown 2 selection: Charlie
Dropdown 3 selections: Echo, Foxtrot, Hotel

<fieldset submitButton="false">
  <input type="dropdown" searchWhenChanged="true">
    <label>Dropdown 1</label>
    <choice value="1">1</choice>
    <choice value="2">2</choice>
    <default></default>
  </input>
  <input type="multiselect" searchWhenChanged="true" token="bar">
    <label>Dropdown 2</label>
    <choice value="a">Alpha</choice>
    <choice value="b">Bravo</choice>
    <choice value="c">Charlie</choice>
    <choice value="d">Delta</choice>
    <default></default>
  </input>
  <input type="multiselect" searchWhenChanged="true" token="foo">
    <label>Dropdown 3</label>
    <choice value="e">Echo</choice>
    <choice value="f">Foxtrot</choice>
    <choice value="g">Golf</choice>
    <choice value="h">Hotel</choice>
  </input>
</fieldset>
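A sketch of one way to drive the multiselects from the Common KPI dropdown with a <change> handler; the token name kpi and the specific value lists are assumptions, the choice values come from the XML above, and setting the form.* tokens to comma-separated choice values is what selects multiple entries:

<input type="dropdown" searchWhenChanged="true" token="kpi">
  <label>Common KPI</label>
  <choice value="1">1</choice>
  <choice value="2">2</choice>
  <change>
    <condition value="1">
      <set token="form.bar">a,b</set>
      <set token="form.foo">g</set>
    </condition>
    <condition value="2">
      <set token="form.bar">c</set>
      <set token="form.foo">e,f,h</set>
    </condition>
  </change>
</input>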
Hello, I have a not-ideal log, looking like this, for example:

"field1=value1"  "field2=val ue 2" "field3=value3"

And I want to extract the key-value pairs at index time. Combinations like the first kv-pair are no problem. The second value, however, is a problem: with my extraction I can only get the "val" part, and the extraction stops at the whitespace.

My rule in transforms.conf looks like this:

[example]
REGEX = (?<_KEY_1>([^=\"]+)=(?<_VAL_1>([^=\"]+)

To clarify, my results in Splunk look like this:

field1 = value1
field2 = val
field3 = value3

I am not sure what I am missing.
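A sketch of a rule that uses the surrounding double quotes as delimiters, so a value can contain spaces; it keeps the stanza name and the _KEY_1/_VAL_1 mechanism from the rule above and assumes every pair really is wrapped in quotes:

[example]
REGEX = "(?<_KEY_1>[^="]+)=(?<_VAL_1>[^"]*)"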
| where like(RouteCode, "50%") AND !like(RouteCode, "503%")

I am trying to show RouteCode 501, 502, or any other 50x code that is not 503.
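A sketch of the same filter with NOT, which where accepts for negating like() (assuming RouteCode holds the three-digit code as shown):

| where like(RouteCode, "50%") AND NOT like(RouteCode, "503%")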
My search is like this:

index = idx source = src data_stamp = A field1 = *lol* | table Field2

This generates a column with only one value, which I need to store in some $VAR.

index = idx source = src data_stamp = B field1 = *lol* TEST = $VAR | table field3
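A sketch of how a subsearch could stand in for $VAR, copying the field and index names from the question: return emits TEST="<value of Field2>" into the outer search, so no variable is needed.

index = idx source = src data_stamp = B field1 = *lol*
    [ search index = idx source = src data_stamp = A field1 = *lol*
      | head 1
      | rename Field2 as TEST
      | return TEST ]
| table field3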