All Topics

Hi. I am seeing suspicious behaviour from Splunk when it indexes a log file. Here is the issue: when I search through yesterday's log, it only shows events from 12:00 to 23:59! Events from 00:00 to 12:00 are missing! I have two paths that are continuously indexed by Splunk, like below:

index today: /data/today
index yesterday: /data/yesterday

index today: the log is copied to the above path every day and contains log data from 00:00 to 12:00 of today.
index yesterday: the log is copied to the above path every day and contains log data from 00:00 to 23:59 of yesterday.

FYI: the indexes and paths are completely different. FYI: the log files are the same; the only difference is that the "today" log contains data only up to 12:00.

Any ideas? Thanks
Hi, the customer asked us to check an issue with the MS O365 app. While collecting data using continuously_monitor mode, we have repeatedly confirmed that it stops working properly from the moment milliseconds are added to the time value. This causes data collection to fail. Is this a bug in the O365 app? Has anyone seen or does anyone remember a similar phenomenon? Please refer to the attached image. Thanks in advance. Jiho
Hi, I'm new to Splunk alerting, and I've hit a problem changing alert permissions via the ACL REST API. I'm writing a script to create Splunk alerts through the REST API. I use the saved/searches endpoint to create a new alert and everything goes well; the alert is created successfully. Then I append `{alert_name}/acl` to the URL and attempt to update the permissions of the alert I just created, but I receive a 403 error telling me "You do not have permission to change the owner of this object".

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">You do not have permission to change the owner of this object.</msg>
  </messages>
</response>

What confuses me more is that I can change the permissions in the Splunk Web GUI; I can see the "Edit permissions" button for this alert. BTW, my account does not have the admin role. I have no idea whether this is related to the account's role or to some capability. Has anyone encountered the same problem? Your help is much appreciated!
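For what it's worth, here is a hedged sketch of an ACL call (host, user, and alert name are illustrative). On ACL POSTs both `sharing` and `owner` are expected, and omitting `owner`, or passing a different user, can trigger exactly this 403 even for accounts that can edit permissions in the UI:

```
# POST to the alert's acl endpoint; pass sharing AND owner explicitly
curl -k -u myuser:mypassword \
    "https://splunkhost:8089/servicesNS/myuser/search/saved/searches/my_alert/acl" \
    -d sharing=app \
    -d owner=myuser \
    -d perms.read="*" \
    -d perms.write="power"
```

If this still returns 403, capturing what Splunk Web sends when "Edit permissions" is saved and comparing the two requests is a reasonable next step.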
Hello, Splunkers! We are configuring search head clustering, and when we run init it gives a hostname error. However, init has been configured, and so has bootstrap. Also, there is no problem deploying apps to the cluster. How do I fix the error below? Huge thanks in advance!

WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Search head clustering has been initialized on this node. You need to restart the Splunk Server (splunkd) for your changes to take effect.
Hello, I have XML files with multi-line field values and I'm having trouble extracting those values. A sample field extraction for the first two values and sample data/events are given below. Any help will be highly appreciated. Thank you!

Sample code:

<USER>(?P<USER>.+)<\/USER>\\n\\r<USERTYPE>(?P<USERTYPE>.+)<\/USERTYPE>

Sample data:

<MTData>
    <USER>TEST05GLBC</USER>
    <USERTYPE>Admin</USERTYPE>
    <SUBJECT />
    <SESSION>hp0vtlg001</SESSION>
    <SYSTEM>DS</SYSTEM>
    <EVENTTYPE>USER_Supervisor</EVENTTYPE>
    <EVENTID>VIEW</EVENTID>
    <SIP>10.210.345.254</SIP>
    <EVENTSTATUS>120</EVENTSTATUS>
    <EMSG />
    <STATUS>FALSE</STATUS>
    <STIME>2022-06-02 19:10:57.967</STIME>
    <VADDATA>2019:00-00002; 2019:00-0000002; 2019:00-00003</VADDATA>
    <TIMEPERIOD />
    <CODE />
    <RTYPE />
    <DTFTYPE />
    <DIP>10.225.35.45</DIP>
    <DEVICE>Laptop</DEVICE>
</MTData>
<MTData>
    <USER>TEST06HLDC</USER>
    <USERTYPE>Power</USERTYPE>
    <SUBJECT />
    <SESSION>hp2ftlg021</SESSION>
    <SYSTEM>Test</SYSTEM>
    <EVENTTYPE>USER_MANAGER</EVENTTYPE>
    <EVENTID>Update</EVENTID>
    <SIP>10.210.345.254</SIP>
    <EVENTSTATUS>122</EVENTSTATUS>
    <EMSG />
    <STATUS>TRUE</STATUS>
    <STIME>2022-06-02 19:20:57.967</STIME>
    <VADDATA>2019:00-00012; 2019:00-0000002; 2019:00-00024</VADDATA>
    <TIMEPERIOD />
    <CODE />
    <RTYPE />
    <DTFTYPE />
    <DIP>10.225.35.45</DIP>
    <DEVICE>Laptop</DEVICE>
</MTData>
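A possible fix (an untested sketch): in the raw event the tags are separated by real whitespace and newlines rather than the literal characters `\n\r`, and `.+` can over-match across tags, so matching any whitespace between tags and excluding `<` inside values is safer:

```
| rex "<USER>(?<USER>[^<]+)</USER>\s*<USERTYPE>(?<USERTYPE>[^<]+)</USERTYPE>"
```

Since the events are well-formed XML, `| spath` (or `KV_MODE = xml` on the sourcetype) may also extract every field without hand-written regexes.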
Having trouble with my role/group mapping with SAML. I'm setting up Azure AD + SAML on a test host, and my group claim is coming back as "d5366c24-8188-xxxx-xxxx-65e599a64ed9" rather than the human-readable "SplunkSSO" group name that I expect. Funnily enough, this works:

[roleMap_SAML]
power = d5366c24-8188-xxxx-xxxx-65e599a64ed9

But I was expecting to map human-readable group names to roles. I assume there is an error in Attributes and Claims in the Splunk Azure app, but I'm not seeing it. Any ideas where I might look?
Hi, I am trying to get all events with two different kinds of ObjectName (A or B vs. C) but with the same userName, where their access times are close together. The accessTime of events with ObjectName C should occur just after the events with ObjectName A or B. Here is my current query:

index=index1 host=host1 ObjectName=A OR ObjectName=B
| rename accessTime AS accTime1
| eval ptime=strptime(accTime1,"%Y-%m-%d %H:%M:%S")
| join userName
    [ search index=index1 ObjectName=C
      | rename accessTime AS accTime2
      | eval itime=strptime(accTime2,"%Y-%m-%d %H:%M:%S") ]
| eval diff=abs(ptime-itime)/60
| appendpipe [| search diff<2]
| timechart span=1day dc(userName)

Is there any way to optimize this query? When the search window grows to a month or more, the subsearch limitations affect the search results. Thanks!
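One join-free direction (a hedged sketch, and a slight change of semantics: it compares the last A/B access with the first C access per user per day, rather than every pair) is to pull both event types into one search and correlate with stats, which avoids the subsearch row limits entirely:

```
index=index1 ((host=host1 (ObjectName=A OR ObjectName=B)) OR ObjectName=C)
| eval etime=strptime(accessTime, "%Y-%m-%d %H:%M:%S")
| bin _time span=1d
| stats max(eval(if(ObjectName!="C", etime, null()))) as abtime
        min(eval(if(ObjectName="C", etime, null()))) as ctime
        by userName _time
| where abs(ctime - abtime) / 60 < 2
| stats dc(userName) as users by _time
```

If the exact pairwise "C just after A/B" logic is required, a streamstats window over events sorted by userName and time is another subsearch-free option.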
A scheduler issue may be described as:
- a reduced number of completed scheduled searches during certain periods
- the scheduler locking up and not running any scheduled searches for a period of time
- a high number of skipped/deferred scheduled searches
How can I provide Splunk Support with the right diagnostics to determine root cause and solve my problem?
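As a starting point while gathering diagnostics, the scheduler's own logs in _internal show when and how much was skipped (a sketch):

```
index=_internal sourcetype=scheduler
| timechart span=10m count by status
```

Status values such as success, skipped, and deferred come from scheduler.log; a diag generated with the `splunk diag` command also bundles these logs for Support.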
I am befuddled as to why the two searches below return different counts for the same period of time. The tstats one returns a smaller count. I would expect them to be the same number, with tstats just finishing faster. Anyone have thoughts on this?

| tstats count where index=* index!=_*

and

index=* index!=_* | stats count
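To narrow down where the counts diverge, a per-index breakdown of both searches can help (a sketch):

```
| tstats count where index=* index!=_* by index
```

compared against `index=* index!=_* | stats count by index`. tstats reads counts from tsidx metadata rather than scanning raw events, so the index (or time-range boundary) where the two disagree usually points at the cause.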
Uploading the Splunk Enterprise Security package (an 800 MB .spl file) from a user machine to the deployer via the deployer web UI results in the following exception:

413 Request Entity Too Large (nginx)

Environment: Azure AKS, with the search heads behind an NGINX ingress controller. I attempted to add the application via the deployer instance's upload page; clicking Upload fails instantly with the 413 error above.
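Since the error page names nginx, the limit is likely enforced by the ingress controller rather than Splunk. With the NGINX ingress controller, the request-body limit is commonly raised with an annotation on the Ingress resource (a sketch; the resource name and size value are illustrative):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-deployer
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "1g"
```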
I had to take an indexer down for several days while an SSD was replaced. I used the "splunk offline --enforce-counts" command to allow the data to replicate back out to the other indexers (we have a replication factor of 1). Now that the SSD has been replaced, what is the best way to rejoin this host to the cluster?
Hi everyone! I have never used the | rex command, and I would like to parse the IP address out of the raw event using rex. The event is:

org.apache.sor.client.soj.impl.HttpSorClient$Exception: Error from server at https://pimcv.sps.g:443/sor: Failed handshake due to exhausted 12 seconds timeout on channel [id: 0x2c132bc6, L:/56.201.42.175:42 - R:/56.201.45.41:86].

Can somebody help me do this, please?
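One possible rex (a sketch anchored on the `L:/` local-address marker in the sample event; adjust if other events differ):

```
| rex "L:/(?<ip_address>\d{1,3}(?:\.\d{1,3}){3}):"
```

Against the sample, this captures 56.201.42.175 into ip_address; anchoring on `R:/` instead would capture the remote address 56.201.45.41.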
I need help displaying radio button input options based on the previous radio button selection. I have the inputs below:

<input type="radio" token="environment">
  <label>ENV</label>
  <choice value="site1">s1</choice>
  <choice value="site2">s2</choice>
  <choice value="site3">s3</choice>
</input>
<input type="radio" token="sub-environment">
  <label>S-ENV</label>
  <choice value="site1-Area1">s1A1</choice>
  <choice value="site1-Area2">s1A2</choice>
  <choice value="site1-Area3">s1A3</choice>
  <choice value="site2-Area1">s2A1</choice>
  <choice value="site2-Area2">s2A2</choice>
  <choice value="site2-Area3">s2A3</choice>
  <choice value="site3-Area1">s3A1</choice>
  <choice value="site3-Area2">s3A2</choice>
  <choice value="site3-Area3">s3A3</choice>
  <choice value="*">All</choice>
</input>

I want to dynamically display the second input's options based on the first radio button selection:
- if the user selects site1, automatically display only the options labelled s1A1, s1A2, s1A3, and All
- if the user selects site2, automatically display only the options labelled s2A1, s2A2, s2A3, and All
- if the user selects site3, automatically display only the options labelled s3A1, s3A2, s3A3, and All
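One common SimpleXML pattern (a hedged sketch) is to generate the second input's choices from a search driven by the first token, instead of listing every static choice; the makeresults query below is illustrative and assumes the label/value pairs can be derived from $environment$:

```
<input type="radio" token="sub_environment">
  <label>S-ENV</label>
  <choice value="*">All</choice>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>value</fieldForValue>
  <search>
    <query>| makeresults
| eval value="$environment$-Area1;$environment$-Area2;$environment$-Area3"
| makemv delim=";" value
| mvexpand value
| eval label=value</query>
  </search>
</input>
```

When the user changes ENV, the search reruns and only that site's areas (plus the static "All" choice) are offered.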
Hi, I am fairly new to Splunk and inherited an environment. Why does the source code of some of our dashboards start with the <dashboard> tag, while others don't have that tag and start with the <form> tag? Furthermore, if I add a <dashboard> tag above the <form> tag (and of course terminate it at the end of the code with </dashboard>), I get the following alert/error: "This dashboard has no panels. Start editing to add panels."
Hello, all our Windows Application, Security, and System logs are being forwarded to a central syslog-ng server (one line per event). On the syslog server we have the Splunk Heavy Forwarder installed, and I have been forwarding the logs on to the Splunk indexer. I'm trying to use the Windows TA add-on, and it requires the sourcetype to be WinEventLog and the source to be one of WinEventLog:Application, WinEventLog:Security, or WinEventLog:System. So in inputs.conf on the heavy forwarder I added these lines to each input:

[monitor:///app/syslog-ng/logs/production-logs/siem_win_sec_log]
sourcetype = WinEventLog
source = WinEventLog:Security
_TCP_ROUTING = SIEMIndexer

[monitor:///app/syslog-ng/logs/production-logs/siem_win_app_log]
sourcetype = WinEventLog
source = WinEventLog:Application
_TCP_ROUTING = SIEMIndexer

[monitor:///app/syslog-ng/logs/production-logs/siem_win_sys_log]
sourcetype = WinEventLog
source = WinEventLog:System
_TCP_ROUTING = SIEMIndexer

Now when I search on the search head, I see that two, three, or four log entries are being grouped into one big event. I played around with the source/sourcetype fields and found that the problem only occurs when the source starts with WinEventLog. I found the [source::WinEventLog...] stanza in props.conf and tried commenting it out partially or completely, and it made no difference. This was in /etc/system/local/props.conf on both the indexer and the heavy forwarder. Is there any way to get Windows event logs in syslog format into Splunk such that the Windows TA add-on will recognize them? They will eventually feed into Security Essentials. Thank you, Dean
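A hedged props.conf sketch for this setup (it assumes, as described, that each Windows event arrives as exactly one syslog line):

```
[WinEventLog]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

The Windows TA's [source::WinEventLog...] stanzas are tuned for multi-line native event log text, which is likely what merges several one-line syslog events into one. Line breaking is applied on the first heavy forwarder or indexer that parses the data, so the override must live on that tier, and keeping it in a small local app rather than editing the TA means upgrades will not revert it.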
Hello Splunkers, I have a query as follows:

My query blah blah blah
| stats latest(description) as description latest(result) as result latest(object) as object by host source _time

which gives results as attached. As highlighted in yellow in the results, there are two different time values: one under _time and another inside description. Now I want to filter for the hosts where the difference between _time and the time in the description is more than 24 hours. Something like:

difference = (_time - time_in_the_description) > 24 hours
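A sketch of the filtering step (it assumes the embedded timestamp looks like `2022-06-02 19:10:57` and lives in the description field; the rex must be adjusted to the real format):

```
| rex field=description "(?<desc_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
| eval desc_epoch=strptime(desc_time, "%Y-%m-%d %H:%M:%S")
| where (_time - desc_epoch) > 86400
```

86400 is 24 hours in seconds; wrapping the difference in abs() would also catch description times that are ahead of _time.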
Hello, the customer I'm supporting wants to configure Splunk to ingest MECM data and generate reports from it. I've tried looking through the Splunk docs website but can't seem to find anything. Should I just use the SCCM docs as a guide, or does anyone know of any other resources? Thank you
Dear Community, I would like some assistance and/or clarification regarding Splunk's base-search/post-processing functionality. I have read/heard that using one base search plus post-processing instead of several similar queries is cost-effective; we can save SVCs (Splunk Virtual Compute units) with it. In practice, unfortunately, I have experienced quite the opposite. Let's say I have a dashboard (call it "A") with these queries:

index="myIndex" "[OPS] [INFO] event=\"asd\"" | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | stats dc(user_id) as "Unique users, who has logged ..."
index="myIndex" "[OPS] [INFO] event=\"asd\"" | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | timechart count by result
index="myIndex" "[OPS] [INFO] event=\"asd\"" | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | dedup user_id | timechart span=1h count as "per hour" | streamstats sum("per hour") as "total"
index="myIndex" "[OPS] [INFO] event=\"asd\"" | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | timechart dc(user_id) as "Unique users"
index="myIndex" "[OPS] [INFO] event=\"asd\"" | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Failed" AND reason != "bbb" | timechart count by reason

I cloned this "A" dashboard (let's call the clone "B"). I ran into some issues, like getting no data, or different numbers on "B" than on "A", but after some googling and reading the Splunk community, I managed to get the same results on "B" with:

A base search:

index="myIndex" "[OPS] [INFO] event=\"asd\"" | stats count by user_id is_aaaaa_login environment result reason _time

Post-processes:

search | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | stats dc(user_id) as "Unique users, who has logged ..."
search | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | timechart count by result
search | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | dedup user_id | timechart span=1h count as "per hour" | streamstats sum("per hour") as "total"
search | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Successful" | timechart dc(user_id) as "Unique users"
search | where user_id != "0" AND is_aaaaa_login="true" AND environment="prod" AND result="Failed" AND reason != "bbb" | timechart count by reason

I added refresh="180" to the top of both dashboards and left them open in my browser for about one hour (the shared date picker was set to "last 24 hours"). After this, I was surprised to see in the "Splunk App for Chargeback" that dashboard "A" consumed around 5 SVCs while dashboard "B" used around 15 SVCs. So the dashboard with the base search was far more expensive than the "normal" one; I thought it would be much cheaper. Why is that? Did I construct my base/post-process queries badly? If so, what should I change? I searched a lot and found only one comment on the Splunk community, here: https://community.splunk.com/t5/Dashboards-Visualizations/Base-Search-for-dashboard-optimization/m-p/348795 ("However, I do not recommend it when dealing with large data because base search is slow."), which implies that maybe a base search is not always the cheaper solution. So I executed only my base search over a 24-hour interval; it returned a table with around 3,000,000 rows. Does this count as a large data set? Should I forget about using base searches? Thank you very much for your help!
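One thing worth trying (a hedged sketch, not a guaranteed fix): push the filters shared by all five panels into the base search and pre-bucket time, so far fewer rows are shipped to the post-processes. Post-processing also operates on a truncated result set when the base search returns very large numbers of rows, which could explain both the cost and the earlier wrong numbers; around 3,000,000 rows is well into that territory.

```
index="myIndex" "[OPS] [INFO] event=\"asd\"" user_id!=0 is_aaaaa_login="true" environment="prod"
| bin _time span=1h
| stats count by user_id result reason _time
```

Each post-process then only filters on result/reason and aggregates, since user_id, is_aaaaa_login, and environment are already handled in the base search.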
Hi all, I need help creating an Active Directory changes report. I used Windows events like 4728, 4729, and 4730, but could not print to PDF. Is there a search that will return all creations and deletions of global groups? Thank you!
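A sketch of such a search (field names like EventCode and Group_Name follow the Windows TA's conventions and should be checked against the actual data; 4727/4730 are the creation/deletion codes for security-enabled global groups, while 4728/4729 are membership changes):

```
index=wineventlog sourcetype=WinEventLog EventCode IN (4727, 4728, 4729, 4730)
| eval action=case(EventCode=4727, "group created",
                   EventCode=4728, "member added",
                   EventCode=4729, "member removed",
                   EventCode=4730, "group deleted")
| table _time, user, action, Group_Name
```

If exporting the report to PDF still fails, that is likely a separate issue (panel type or PDF rendering settings) from the search itself.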
Need some help; I can't wrap my head around this. I need to look up a CSV that contains clientip and compare it against my results, which also have the IP in a field named clientip, showing in a new column whether each one matches or not.

index=foo ....
    [| inputlookup IPlist.csv | fields clientip | rename clientip AS knownIP]
| eval isMatching = if(clientip == knownIP, "matching", "notmatch")
| table clientip, field_x, field_y, field_z, isMatching

Am I way off base here? Should I be looking at other commands? I get zero results with this; without it, my main search runs fine and many events with IPs show. Much appreciated.
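The usual pattern (a sketch) is the lookup command rather than a subsearch: a subsearch is spliced into the main search as a filter, so knownIP never exists as an event field for eval to compare, which is why nothing matches:

```
index=foo
| lookup IPlist.csv clientip OUTPUT clientip AS knownIP
| eval isMatching=if(isnotnull(knownIP), "matching", "notmatch")
| table clientip, isMatching
```

This assumes IPlist.csv is uploaded as a lookup table file; OUTPUT clientip AS knownIP copies the matched value into a new field, so isnotnull() distinguishes hits from misses.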