All Topics

Hello, I have a source file whose events use two different timestamp formats. How would I write TIME_FORMAT in my props.conf? Any help will be highly appreciated. Thank you.

2021/05/30 16:28:12    JAVA_OPTION_USESERVERCIPHERSUITESORDER:-DuseServerCipherSuitesOrder=true
2021/05/30 16:28:12    JAVA_OPTION_XMX:-Xmx196m
2021-05-30 16:28:27.872  INFO 28709 --- [           main] .c.r.b.i.e.ConsulBootstrapPropertyLoader
2021-05-30 16:28:43.677  INFO 28709 --- [           main] com.sas.studio.ApplicationPetrichor
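A single strptime string in TIME_FORMAT cannot match both "2021/05/30 16:28:12" and "2021-05-30 16:28:27.872", so one option is to omit TIME_FORMAT and let Splunk's automatic timestamp recognition handle both patterns. A minimal props.conf sketch, assuming a sourcetype name of java_options:

    [java_options]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    TIME_PREFIX = ^
    # No TIME_FORMAT: automatic recognition tries both date layouts.
    MAX_TIMESTAMP_LOOKAHEAD = 23

If the two formats come from genuinely different log types, the cleaner fix is to split them into two sourcetypes, each with its own TIME_FORMAT.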
Is there any difference between placing an `etc/passwd` file in place and using an `etc/system/local/user-seed.conf` during a scripted install? As I understand it, `user-seed.conf` generates an `etc/passwd` if that file doesn't exist yet, and the `user-seed.conf` is then removed — is that right?
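For reference, a minimal `user-seed.conf` sketch (the username and password values below are placeholders):

    [user_info]
    USERNAME = admin
    PASSWORD = <your-initial-password>

On first start Splunk consumes this file to create the account, whereas a pre-staged `etc/passwd` is used as-is; the practical difference is that `user-seed.conf` holds the credential (cleartext, or pre-hashed via HASHED_PASSWORD) only until first startup.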
Example: a series of events all have the same incident number (1170820), outlining the lifecycle of a ticket from open to close. I want to create a field for when the incident was closed so that the time can be easily identified on a dashboard (Time | Closed Time | Computer Name | etc.). How can I isolate a sub-search that tracks the latest ticket in a series of incidents, so that I can set a "Closed Time" field?
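One way to avoid a sub-search entirely is to derive the close time inside stats; a sketch, assuming fields named incident_number, status, and computer_name:

    index=tickets sourcetype=incident
    | stats max(eval(if(status="Closed", _time, null()))) as closed_epoch,
            latest(computer_name) as "Computer Name"
            by incident_number
    | eval "Closed Time"=strftime(closed_epoch, "%Y-%m-%d %H:%M:%S")

The eval inside max() keeps only the timestamps of "Closed" events, so closed_epoch stays null until a closing event exists for that incident.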
I am getting an error after loading the Splunk forwarder on a Linux server (this same load runs on other Linux servers with no issues):

Aug 12 12:04:18 ladcivrnvpt03 kernel: traps: splunkd[66184] trap invalid opcode ip:7fd6a257470b sp:7ffeb4a673a0 error:0 in liboneagentproc.so[7fd6a2560000+84000]

I run ./splunk start --accept-license and it appears to start, but ps -eaf | grep splunk shows nothing running. No log files are generated in .../splunkforwarder/var/log/splunk for the splunkd process either.
Hello, I was trying to write a props.conf stanza for the following sample events:

2021-06-08T13:26:53.665000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|
2021-06-08T13:26:54.478000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|

Since the events are pipe ("|") delimited, here is what I wrote, but it is not working. Any help will be highly appreciated, thank you so much.

SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
INDEXED_EXTRACTIONS = psv
TIME_FORMAT = %Y%m%d %H:%M:%S:%Q
TIMESTAMP_FIELDS = TIMESTAMP
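Two things stand out: the TIME_FORMAT doesn't match the ISO-8601 timestamps in the sample, and TIMESTAMP_FIELDS must name a field that actually exists, which for headerless PSV data means declaring FIELD_NAMES. A hedged sketch — the sourcetype and field names below are assumptions:

    [pgm_psv]
    INDEXED_EXTRACTIONS = psv
    FIELD_NAMES = timestamp,component,host,user,pid
    TIMESTAMP_FIELDS = timestamp
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
    SHOULD_LINEMERGE = false

%6N covers the six subsecond digits and %:z the colon-separated UTC offset; Splunk's strptime does not recognize %f.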
Hello, what would be my TIME_FORMAT in props.conf for these events?

2021-06-08T13:26:53.665000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|
2021-06-08T13:26:54.478000-04:00|PGM|mtb1120ppcdwap6|vggtb|26462|

I wrote this, but it does not cover the entire timestamp:

TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%f%z

Any help will be highly appreciated. Thank you so much.
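The %f specifier is the likely culprit: Splunk's strptime uses %N (with an optional width) for subseconds and %:z for an offset written with a colon. A sketch under that assumption:

    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
    MAX_TIMESTAMP_LOOKAHEAD = 32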
Hi, I have seen that when I do a POST request to "https://splunk_host:8088/services/collector/event" with validate_cert=False, it successfully sends the data to Splunk from my application. Whereas when I try validate_cert=True, I get errors like "self signed certificate" or:

Cannot connect to host localhost:8088 ssl:default [[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)]

What should I do to avoid this error?
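Verification fails because the HEC endpoint presents a certificate the client's trust store doesn't recognize; the fix is to verify against the CA that signed it (or install a certificate from a trusted CA) rather than disabling verification. A hedged Python sketch using requests — the token and CA bundle path are placeholders:

    import requests

    resp = requests.post(
        "https://splunk_host:8088/services/collector/event",
        headers={"Authorization": "Splunk <hec-token>"},
        json={"event": "hello world", "sourcetype": "manual"},
        verify="/path/to/splunk_ca.pem",  # CA that signed the HEC certificate
    )
    resp.raise_for_status()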
We use Cribl for field extraction. `Action` is a field that is parsed by Cribl and should be an indexed field in Splunk.

An initial search with the query "index=client* sourcetype=unix_auth" returns 6 failures in the last 4 hours. When I use the search "index=client* sourcetype=unix_auth action=fail*", it returns all 6 failed events. When I then change the search to "index=client* sourcetype=unix_auth action=failure", it does not return any events. But when I use "::" in the search, "index=client* sourcetype=unix_auth action::failure", it returns all the events.

Sample event:
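That pattern is the classic signature of an indexed field Splunk doesn't know is indexed: `field::value` queries the index directly, while `field=value` relies on search-time knowledge of the field. Declaring the field in fields.conf on the search head should make `action=failure` behave; a sketch, assuming `action` really is written as an indexed field upstream by Cribl:

    [action]
    INDEXED = true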
I have data in Splunk (index="main") like:

Fname   Country
fname1  USA
fname1  USA
fname3  USA

I want to change some data where Fname="fname1": set Country = UK and add a field Phone = 123. The final data will be:

Fname   Phone  Country
fname1  123    UK
fname1  123    UK
fname3         USA

How can I do that?
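Indexed events can't be edited in place, but the result set can be rewritten at search time with eval; a sketch:

    index="main"
    | eval Country=if(Fname="fname1", "UK", Country)
    | eval Phone=if(Fname="fname1", "123", null())
    | table Fname Phone Country

If the corrections need to persist, the usual pattern is a lookup table of overrides applied in searches rather than modifying the index.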
Hi, I have a total of 6 checkboxes in my dashboard. By default the first checkbox is active; if I check the second checkbox, the first should be unchecked automatically, and likewise if I check the third, only the third should be active and the rest inactive. At any time exactly one checkbox should be active (whichever I checked last). Please help me; it would be appreciated. Thank you.
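"Exactly one active at a time" is natively what a radio input does, so that is the simplest route if checkboxes aren't mandatory. If they are, each checkbox's <change> block can unset the others; a hedged Simple XML sketch for the first two of the six (token names are assumptions):

    <input type="checkbox" token="cb1" searchWhenChanged="true">
      <choice value="one">Option 1</choice>
      <default>one</default>
      <change>
        <condition value="one">
          <unset token="form.cb2"></unset>
          <!-- also unset form.cb3 ... form.cb6 -->
        </condition>
      </change>
    </input>
    <input type="checkbox" token="cb2" searchWhenChanged="true">
      <choice value="two">Option 2</choice>
      <change>
        <condition value="two">
          <unset token="form.cb1"></unset>
          <!-- also unset form.cb3 ... form.cb6 -->
        </condition>
      </change>
    </input>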
Hi All, I am using the below query to search for certain logs:

index=int_gcg_apac_solace_166076 host="mwgcb-csrla0*U*" source="/logs/confluent/connect-distributed/apac/TW/*" "Task is being killed and will not recover until manually restarted"
| rex field=_raw "(?ms)id\=(?P<Connector>(\w+\.){1,9}\w+\-\d)\}"
| lookup region_lookup.csv "source"

But the command | lookup region_lookup.csv "source" is not getting me any result for the Region from the lookup table. The lookup table I am trying to use is:

source                                          Region
/logs/confluent/connect-distributed/apac/HK/*   HongKong
/logs/confluent/connect-distributed/apac/SG/*   Singapore
/logs/confluent/connect-distributed/apac/AU/*   Australia
/logs/confluent/connect-distributed/apac/VN/*   Vietnam
/logs/confluent/connect-distributed/apac/MY/*   Malaysia
/logs/confluent/connect-distributed/apac/ID/*   Indonesia
/logs/confluent/connect-distributed/apac/TH/*   Thailand
/logs/confluent/connect-distributed/apac/TW/*   Taiwan

Note: each source has multiple files inside it, e.g. "logs/confluent/connect-distributed/apac/TW/*" will contain file paths like "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log", "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log1", "logs/confluent/connect-distributed/apac/TW/kafkaconnect.log2" and so on, and the searched indicator "Task is being killed and will not recover until manually restarted" may appear in any of them. Is there any way I can use the lookup table as desired? Your kind advice will be highly appreciated. Thank you!
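Wildcards in a CSV only take effect through a lookup definition with a WILDCARD match_type, and the lookup command must then reference the definition name rather than the .csv file. A hedged transforms.conf sketch (the definition name is an assumption):

    [region_lookup]
    filename = region_lookup.csv
    match_type = WILDCARD(source)
    max_matches = 1

and in the search:

    ... | lookup region_lookup source OUTPUT Region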
Is there a trick to adding "value", "type" and "color" to the 'to' entities in this viz? I'm only getting the default values for recipients in one-way relationships.
I have a query:

index="index1"
| spath output=error_code input=RAW_DATA path=MsgSts.Cd
| dedup SESSIONID
| stats count as Total sum(eval(if(error_code=2,1,0))) as Error by OPERATION
| eval Rate = round((Error/Total)*100,2)
| search Rate>20
| table OPERATION Rate Error

And the table is:

OPERATION  Rate   Error
VerifyOTP  24.08  310

Which is what I want, because I want to know which OPERATIONs have an error rate above 20% in a certain time range. But now the hard part: I want an alert that emails me the details of all 310 error events shown above. Since I use the stats command, the only information I have left is Total, Error, Rate and OPERATION. How do I get the detailed events when the rate exceeds 20%?
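Replacing stats with eventstats keeps the raw events alongside the aggregates, so the qualifying error events survive to the end of the pipeline; a sketch:

    index="index1"
    | spath output=error_code input=RAW_DATA path=MsgSts.Cd
    | dedup SESSIONID
    | eventstats count as Total, sum(eval(if(error_code=2,1,0))) as Error by OPERATION
    | eval Rate = round((Error/Total)*100,2)
    | where Rate>20 AND error_code=2

Each remaining event is one of the failures from an OPERATION whose rate exceeded 20%, so the alert's email action can include them directly.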
Hello, I need help with using wildcards in a lookup. I want to exclude from search results the entries listed in the lookup. Example search:

host=* | search NOT [| inputlookup test_lookup.csv]

The lookup test_lookup.csv contains two fields:

exe,comm
/usr/sbin/useradd,*

I need to exclude all results where exe="/usr/sbin/useradd" with any comm value. I added WILDCARD(comm) in the lookup definition, but it doesn't work. transforms.conf:

[test_lookup.csv]
batch_index_query = 0
case_sensitive_match = 1
filename = test_lookup
match_type = WILDCARD(comm)

What did I do wrong? Thank you.
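Two issues are visible: filename must include the .csv extension, and match_type only applies when the lookup command is used — a NOT [| inputlookup ...] subsearch just turns the CSV rows into literal search terms and never consults the definition. One hedged approach is to add a flag column to the CSV (the "filter" column below is an assumption) and exclude on it. transforms.conf:

    [test_lookup]
    filename = test_lookup.csv
    match_type = WILDCARD(comm)

test_lookup.csv:

    exe,comm,filter
    /usr/sbin/useradd,*,true

search:

    host=*
    | lookup test_lookup exe comm OUTPUT filter
    | where isnull(filter)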
Can I use the perpetual Free license for commercial purposes? @splunk
index="www1" sourcetype="access_combined_wcookie" action=* status<=400 | timechart span=1d count(action) by clientip useother=f | addtotals | eval type = if(Total>90 ,"UP","DOWN") | fields _time ... See more...
index="www1" sourcetype="access_combined_wcookie" action=* status<=400 | timechart span=1d count(action) by clientip useother=f | addtotals | eval type = if(Total>90 ,"UP","DOWN") | fields _time 194.* *.*.*.* Total type | sort - _time   I want to change the order of the x-axis field names when using it. | fields _time 194.* *.*.*.* Total type   Is there any other way than this?
Hi, I would like to extract a particular digit from brackets, index it, and build hourly stats on it; at the moment Splunk picks it up as a string, brackets included. The service writes an entry every hour: when there is something to add up it logs the digit, otherwise 0. My goal is an hourly graph of this value so I can see how it changes.
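Without a sample event the exact pattern is a guess, but a rex that captures the digits inside the brackets plus an hourly timechart is the usual shape; a sketch (the index, sourcetype, and field name are assumptions):

    index=main sourcetype=my_service
    | rex field=_raw "\[(?<bracket_value>\d+)\]"
    | timechart span=1h sum(bracket_value) as total
    | fillnull value=0 total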
My search head receives WinEventLog://Application and System, but [WinEventLog://Security] is not found. I'm using Splunk_TA_windows. This is my inputs.conf config in local:

[WinEventLog://Application] -> it works
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml = false

[WinEventLog://Security] -> does not work
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist1 = EventCode="4662" Message="Object Type:(?!\s*groupPolicyContainer)"
blacklist2 = EventCode="566" Message="Object Type:(?!\s*groupPolicyContainer)"
renderXml = false
index = wineventlog

[WinEventLog://System] -> it works
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml = false
index = wineventlog

###### Forwarded WinEventLogs (WEF) ######
[WinEventLog://ForwardedEvents]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
## The add-on supports only XML format for the collection of WinEventLogs using WEF, hence do not change the below renderXml parameter to false.
renderXml = true
host = WinEventLogForwardHost
index = wineventlog

This TA was copied from another server where it works fine. I'm even using a domain admin account to run the service, but I still don't get the Windows Security event log.
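A quick way to see why the Security stanza is being rejected is the forwarder's own splunkd.log; a hedged sketch of the search (the host name is a placeholder):

    index=_internal source=*splunkd.log* host=<forwarder> "WinEventLog" "Security" (ERROR OR WARN)

If the service account genuinely has rights, the usual suspects the log will surface are a malformed blacklist regex or the Security channel's own read ACL.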
newuser = service.users.update(username=modify_username, roles=modify_role).refresh()
AttributeError: 'Users' object has no attribute 'update'
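In splunklib, update() lives on the User entity, not on the Users collection, so the entity has to be fetched first. A hedged Python sketch — the connection values are placeholders, and modify_username/modify_role follow the original post:

    import splunklib.client as client

    service = client.connect(host="localhost", port=8089,
                             username="admin", password="<password>")

    user = service.users[modify_username]   # User entity, not the collection
    newuser = user.update(roles=modify_role).refresh()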
I have an index "telemetry" which gets data from a local directory on a standalone Splunk installation. I deleted some data from the index that came in from a particular directory, using the command:

index="telemetry" source="/data/01/*" | delete

The index still has more data from other sources (e.g. "/data/02", ...). I want to re-index the data from the deleted directory, i.e. "/data/01", again. Running splunk clean eventdata would delete the entire index. I want to wipe from disk only the part of the data that was deleted above, so that I can re-index it. How can I achieve this?
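| delete only masks events from search; nothing is reclaimed on disk, and the fishbucket still remembers the files, so the monitor input won't re-read them. Two hedged options on a standalone instance (paths assume a default install; the file name is a placeholder):

    # one-shot re-ingest, which bypasses the fishbucket:
    $SPLUNK_HOME/bin/splunk add oneshot /data/01/<file>.log -index telemetry

    # or reset the checkpoint for a specific file so the monitor re-reads it:
    $SPLUNK_HOME/bin/splunk cmd btprobe -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db --file /data/01/<file>.log --reset

Selectively wiping just the deleted events from the buckets isn't possible; the re-ingested copies simply coexist with the masked ones.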