All Topics

Hi, I have a list of allowed IP addresses and want to use Splunk to find any Windows login from a source IP that is not on my list. Can you help me write a query, please? Thank you. The events I get in Splunk are from the Security, Application, and System logs.
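A possible starting point, assuming the allowed list is loaded as a lookup file called allowed_ips.csv with a src_ip column, and that successful logons appear as Security EventCode 4624 with a Source_Network_Address field (all names are assumptions; adjust to your environment):

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624
| search NOT
    [ | inputlookup allowed_ips.csv
      | fields src_ip
      | rename src_ip AS Source_Network_Address ]
| table _time user Source_Network_Address
```

The subsearch expands into NOT (Source_Network_Address=ip1 OR Source_Network_Address=ip2 ...), so only logons from addresses outside the lookup survive.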
Hello, I'm new to Splunk and I'm looking for some advice. My search, e.g. <mysearch> | table attributes, returns a value in the following format: name[test 1]srcintf[int1]dstintf[int2]srcaddr[address1]dstaddr[dest1 dest2]service[svc1 svc2 svc3]comments[test comment here] I would like to split the output into individual fields. The values are within the square brackets, i.e. name = test 1, srcintf = int1, dstintf = int2, and so on. Many thanks!
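One way to do this, assuming the attribute keys are always the same seven shown in the sample, is a single rex over the attributes field:

```
<mysearch>
| rex field=attributes "name\[(?<name>[^\]]*)\]srcintf\[(?<srcintf>[^\]]*)\]dstintf\[(?<dstintf>[^\]]*)\]srcaddr\[(?<srcaddr>[^\]]*)\]dstaddr\[(?<dstaddr>[^\]]*)\]service\[(?<service>[^\]]*)\]comments\[(?<comments>[^\]]*)\]"
| table name srcintf dstintf srcaddr dstaddr service comments
```

Multi-value entries such as dstaddr and service come out as space-separated strings; split(dstaddr, " ") could break them into multivalue fields if needed.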
Where can I find user instructions for searching for a block of hashes on a regular basis and emailing an alert if any one of them is detected?
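There isn't a single instruction page for this, but the usual pattern is a lookup file plus a scheduled alert. A sketch, assuming a hypothetical lookup bad_hashes.csv whose file_hash column matches a file_hash field in your events (adjust index and field names):

```
index=endpoint
    [ | inputlookup bad_hashes.csv
      | fields file_hash ]
| stats count latest(_time) AS last_seen BY file_hash
```

Save this as an alert on a schedule (e.g. hourly over the last hour), set the trigger condition to "number of results > 0", and add an email alert action.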
Hi, I am trying to figure out how to write a query for an alert that fires whenever a user has been logged on to a machine for more than 12 hours. Can you please help me figure this out? Thank you.
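A rough sketch pairing Windows Security logon and logoff events (4624 = logon, 4634 = logoff; index and field names are assumptions, adjust to your data). It flags sessions longer than 12 hours, plus sessions that started more than 12 hours ago and have no logoff yet:

```
index=wineventlog EventCode IN (4624, 4634)
| transaction user startswith=eval(EventCode=4624) endswith=eval(EventCode=4634) keepevicted=true
| where duration > 12*60*60 OR (closed_txn=0 AND _time < relative_time(now(), "-12h"))
| table user _time duration
```

transaction can be slow on large volumes; if that becomes a problem, a stats-based pairing of min/max _time by user and logon ID is a faster alternative.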
We are adding a Zscaler proxy to be used by the Splunk TA for O365. Our security group has provided a Root CA 4 PEM file for us to use. Our Splunk environment runs on RHEL and our enterprise is on Splunk v8.2.1. The splunk user (configured in .bashrc) has the http and https proxy environment variables set to the correct entries. In addition, we have this variable defined: export REQUESTS_CA_BUNDLE=$SPLUNK_HOME/etc/auth/our_pem.pem

When Splunk starts up we see this error and it fails to retrieve any events from the remote site:

requests.exceptions.SSLError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /url-path-made-up/oauth2/token (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)'))) 2022-01-13 15:36:41,891 level=INFO pid=25373 tid=MainThread logger=splunksdc.collector pos=collector.py:run:254 | | message="Modular input exited."

Talking to our security team, they are wondering where the O365 TA looks for certificates. Any help getting us past this error? Also, reading some older Answers, it appears SSL verification is turned off by default. This is very important to us because we have more Splunk TAs that will need to talk to the Zscaler proxy. Thank you.
Waiting for web server at https://127.0.0.1:8000 to be available......................................... It stops here, and there is no error in splunkd.log.
Dear all, I'm trying to ingest data from a FireEye HX instance into Splunk, but I cannot find the correct way for this specific FireEye product. Has anyone done this exact integration? If so, please post the reference in the answers section. Thanks in advance.
Hello, due to a Windows system with a wrong system date (the date was set to 2034), the _internal index in my Splunk environment is in this situation. Is there a way to remove the future events from this index? Thanks a lot.
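If the events don't need to be kept, the usual approach is the delete command, run by a user holding the can_delete role. Note that delete only masks events from search results; it does not reclaim disk space. A sketch (the time window is an example; widen it to cover all your future-dated events, and verify the search result set before piping to delete):

```
index=_internal earliest="02/01/2022:00:00:00" latest="01/01/2035:00:00:00"
| delete
```

Since _internal is retention-managed anyway, another option is simply to let the misdated buckets age out on their own.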
Hello, on my deployment server, which acts as a license manager, I cannot see the License Usage Report for the 30-day period. It always shows "No results found". For the "today" report I can see data. Looking in the ../var/splunk folder I can see the license_usage.log file, but there is no type=RolloverSummary inside; there is only type=Usage. Could you help me check this issue? Thanks a lot.
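The 30-day view is built from the RolloverSummary events, which the license manager writes once per day around midnight, so the missing summaries (a clock/timezone issue or a restart at midnight are common causes) explain the empty report. While tracking that down, you can approximate the 30-day report from the type=Usage events yourself (index and source shown are the defaults; run this on the license manager):

```
index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_usage_GB
```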
Hi, could you help me understand why the values for the Y-axis are not being set correctly? I specified 6000 with an interval of 500, but I am getting 5446, as attached. I also want to know how I can update the X-axis to display the data per week instead of per month. I tried using span, but I am not getting good results. I am using the following:

index=xxxxx sourcetype=xxxx EXPRSSN=IBM4D*
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| table EXPRSSN DATE MIPS
| eval _time=strptime(DATE." "."00:00:00","%Y-%m-%d %H:%M:%S")
| chart list(MIPS) over _time by EXPRSSN
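For the weekly X-axis, timechart with span=1w may be easier than chart over _time, since timechart owns the time bucketing. A sketch based on your search (the intermediate strftime/table steps are dropped as they are no longer needed):

```
index=xxxxx sourcetype=xxxx EXPRSSN=IBM4D*
| eval _time = strptime(DATE, "%d%b%Y")
| timechart span=1w list(MIPS) BY EXPRSSN
```

For the Y-axis, besides the Format menu, the maximum can be pinned in the panel's Simple XML with <option name="charting.axisY.maximumNumber">6000</option>.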
Hello, thank you in advance for your help. I have a query that returns a list of names and another query that also returns a list of names. I would like to make a report showing the names present in the first query but not in the second (the delta between the two queries).
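One common pattern is to exclude the second query's names with a NOT subsearch. A sketch with placeholder searches (replace both with your actual queries; it assumes both produce a field called name):

```
index=first_index sourcetype=first_type
| stats count BY name
| search NOT
    [ search index=second_index sourcetype=second_type
      | stats count BY name
      | fields name ]
| fields name
```

Mind the subsearch result limits if the name lists are very large; in that case an append plus stats values(source_query) by name approach scales better.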
I am looking for a solution for table row expansion and collapse via a button click: if the expand button is clicked, it should expand all the rows in the table, and vice versa for collapsing. How can I do this with JavaScript? Please help me with it.
We are not sure how data is stored in IBM ALM, or what storage concept it uses. We are now looking for a solution to collect and index data into Splunk directly from IBM ALM. Does anyone have an implemented solution for collecting this data, or an idea of how to approach it?
index IN (A,B) sourcetype IN (A,B) earliest=-12h latest=@m
| transaction UUID keepevicted=true
| eval ReportKey="Today"
| append
    [ search index IN (A,B) sourcetype IN (A,B) earliest=-12h-1w latest=@m-1w
      | transaction UUID keepevicted=true
      | eval ReportKey="LastWeek"
      | eval _time=_time+60*60*24*7 ]
| timechart span=30m count(linecount) as Volume by ReportKey
| fields _time, Today, LastWeek

This search takes a long time to load, so I am trying to modify it. Can you please help me with this? Thanks in advance, Veerendra
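Since the search only counts transactions per 30-minute bucket, one way to speed it up is to replace transaction (which is memory-hungry and runs on the search head) with stats, which only needs the earliest time per UUID and can be parallelized across indexers. A sketch, assuming one count per UUID at its first event is the intended semantics:

```
index IN (A,B) sourcetype IN (A,B) earliest=-12h latest=@m
| eval ReportKey="Today"
| append
    [ search index IN (A,B) sourcetype IN (A,B) earliest=-12h-1w latest=@m-1w
      | eval ReportKey="LastWeek"
      | eval _time=_time+60*60*24*7 ]
| stats min(_time) AS _time BY UUID, ReportKey
| timechart span=30m count AS Volume BY ReportKey
```

If you relied on other transaction behavior (maxspan, event grouping, linecount sums), this shortcut does not apply as-is.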
Hello, I am trying to format multi-value cell data in a dashboard table using mvmap in an eval token before passing it on to a drilldown, however I am unable to figure out how to format the eval function and if this approach would work at all. I would appreciate if someone could tell me why this function fails. I have included a test dashboard which shows sample data (sample column) and the format that I would like to create (test column). Unfortunately, the 'temptoken' token never gets evaluated. Note, I understand that I could use different workarounds to avoid using mvmap in an eval token, such as creating a hidden field in the table and use it for drilldown, or using different eval functions (depending on the use case). I am specifically interested in the format of using mvmap in an eval token, as this function could be really useful in more complex cases that I have to deal with. <dashboard> <label>mvmap in eval token</label> <row> <panel> <table> <search> <query> <![CDATA[ | makeresults | fields - _time | eval sample = "text1 -> text2,text3 -> text4" | eval sample = split(sample, ",") ``` the SPL above this line will generate the sample data ``` | eval test = mvmap(sample, split(sample, " -> ")) ]]> </query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">cell</option> <drilldown> <condition match="$click.name2$==&quot;sample&quot;"> <!-- This eval function is not working --> <eval token="temptoken">mvmap('row.sample', split('row.sample', " -> "))</eval> </condition> <condition match="$click.name2$==&quot;test&quot;"> <eval token="temptoken2">'row.test'</eval> </condition> </drilldown> </table> </panel> </row> <row> <html> <p> temptoken: $temptoken$ </p> <p> temptoken2: $temptoken2$ </p> </html> </row> </dashboard>   Best Regards, Robert
Hi, I have been trying to deploy the Enterprise Security 7-day free trial sandbox for days now without success. Each time I attempt to subscribe, I get the following error:

Proxy Error
The proxy server received an invalid response from an upstream server. The proxy server could not handle the request.
Reason: DNS lookup failure for: uw2-iteng-prd-ss-cf-cloud-trial-1821109459.us-west-2.elb.amazonaws.com

Additionally, a 502 Bad Gateway error was encountered while trying to use an ErrorDocument to handle the request.

I don't know why this is happening. Can someone please advise me on what to do?
Can I manage summary index gaps? My scheduled searches were missed, and now I need to backfill the gaps in my summary index.
Hi, I have a dashboard that uses a base search for all the panels. When I run the base search outside of the dashboard, it takes about 7 seconds to complete, but when I open the dashboard, the panels finish loading only after a minute or more. The dashboard has many panels and many filters and tokens; could that be affecting performance? How can I improve it? Thanks.
Hi All, I have done an index search for disk data and then a lookup against a CSV to determine, per Application, which servers' data should be displayed in the dashboard panel. Can someone suggest how to get the server list from the CSV for a given Application and then pull the disk performance data from the index? I am doing the below, but I am not able to use sv_value in the index search:

| inputlookup Server_details.csv
| search Application="app name"
| stats dc(Server) as "Count of Server", values(Server) as Server by Application
| eval Server = mvjoin(Server, " OR ")
| stats values(Server) as sv_value

Please suggest. Regards, Nayan
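The mvjoin/sv_value step isn't needed: a subsearch can hand the server list straight to the index search as an OR of host terms. A sketch (assuming the CSV's Server values match the events' host field; the disk index, sourcetype, and metric field are made-up placeholders):

```
index=perf sourcetype=disk_metrics
    [ | inputlookup Server_details.csv
      | search Application="app name"
      | fields Server
      | rename Server AS host ]
| stats avg(disk_used_pct) AS avg_disk_used BY host
```

The subsearch expands into (host=server1 OR host=server2 ...), so only the Application's servers are searched.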
I added two new indexers to our 10-indexer "cluster" (we have a replication factor of 1, so I'm using the quotes, because it's really more of a simple distributed search setup; but we have a master node and we can rebalance, so it counts as a cluster ;-)) and I ran a rebalance so the data would get redistributed across the whole environment. And now I'm a bit puzzled.

Firstly, the two new indexers are stressed with datamodel acceleration. Why is that? I would understand if all indexers needed to re-accelerate the datamodel, but why only those two? (I wouldn't be very happy if I had to re-accelerate my TB-sized indexes, but I'd understand.) I did indeed start the rebalancing around 16:30 yesterday.

Secondly, I can't really understand some of the effects of the rebalancing. It seems that even after rebalancing, the indexers aren't well balanced. Example: the 9th one is a new indexer. I see that it has 66 buckets, so some buckets were moved to that server, but I have no idea why the average bucket size is so low on it. This is quite consistent across all the indexes: the numbers of buckets are relatively similar across the deployment, but the bucket sizes on the two new indexers are way lower than on the rest. The indexes config (and most of the indexers' config) is of course pushed from the master node, so there should be no significant difference (I'll do a recheck with btool anyway).

Thirdly, I don't know why the disk usage is so inconsistent across the various reporting methods.

| rest /services/data/indexes splunk_server=*in*
| stats sum(currentDBSizeMB) as totalDBSize by splunk_server

gives me about 1.3-1.5 TB for the new indexers, whereas df on the server shows about 4.5 TB of used space. OK. I correlated it with

| dbinspect index=*
| search splunk_server=*in*
| stats sum(rawSize) sum(sizeOnDiskMB) by splunk_server

and it seems that the REST call gives the raw size, not the size including the summarized data.

But then again, dbinspect shows that:
1) Old indexers have around 2.2 TB of sum(rawSize), whereas the new ones have around 1.3 TB.
2) Old indexers have 6.5 TB of sum(sizeOnDiskMB); the new ones, 4.5 TB.
3) On the new indexers the 4.5 TB is quite consistent with the usage reported by df. On the old ones there is about 1 TB of "extra" usage on the filesystems. Is it due to some unused but not yet deleted data? Can I identify where it's located and clean it up?