All Posts


Ten seconds is far too often to refresh a dashboard. Unless you have an automaton monitoring the dashboard and taking action on what it finds, 5 minutes is more reasonable.

It sounds like the dashboard is occasionally encountering periods when there are too many other searches running so it has to wait for resources. There is little a dashboard can do about that (other than not contributing to the problem by refreshing too frequently).

If multiple users are accessing the dashboard at the same time, consider replacing in-line searches with scheduled searches that are invoked by the dashboard using the loadjob or savedsearch command. That will replace multiple executions of the same query with a single execution, and each user will have the same view of the data.
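As an illustration of that pattern, here is a minimal sketch (the saved-search name, owner, and app below are placeholders, not from the original post): schedule a report, for example every 5 minutes, and have each dashboard panel load the cached results of the latest scheduled run instead of re-running the query:

| loadjob savedsearch="admin:search:dashboard_base_search"

Using | savedsearch dashboard_base_search would re-run the report on demand; loadjob just fetches the artifacts of the most recent scheduled run, which is what keeps concurrent viewers from each spawning their own search.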
We create two indexes per application in the same Splunk instance, one for non_prod logs and one for prod logs. They create 2 AD groups (np and prod). We create the indexes and roles and assign them to the respective AD groups. Up to this point everything is good.

Now we have created a single summary index for the data from all prod indexes, and we need to give all app teams access to that index. Since it is a single summary index, we thought of filtering it at the role level using srchFilter and the service field, so that one user is restricted from seeing another app's summary data.

Below is the role created for non-prod:

[role_abc]
srchIndexesAllowed = non_prod
srchIndexesDefault = non_prod

Below is the role created for prod:

[role_xyz]
srchIndexesAllowed = prod;opco_summary
srchIndexesDefault = prod
srchFilter = (index::prod OR (index::opco_summary service::juniper-prod))

I am not sure whether = or :: is the right syntax here. When I test in the UI it gives a warning when I use =, but when I use :: the search preview returns no results. Which should I use?

My other doubt: if a user with these two roles searches only index=non_prod, will they see results or not? How does this search work in the backend? Is there any way to test it?

Also, a few users are part of 6-8 AD groups (6-8 indexes). How does srchFilter work in that case? Please clarify.
Hi @ITWhisperer,

I have tried this method. The host name in the log is structured like "hostname.abcgroup.com", and I want to search like "hostname*". Since the hostnames are retrieved from the lookup, it works as a static string search and does not filter the host. I tried this after getting the data from the lookup:

| eval host_pattern=host."*"
| table host_pattern

but this is also not working. I guess Splunk may be treating the wildcard * as a literal string when I filter like this. Any suggestions for this?
index=* [| inputlookup hostList.csv | table host ]
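A variation on that query that may address the wildcard issue, sketched on the assumption that the lookup column is named host: append the wildcard inside the subsearch and keep the field name as host, so the subsearch expands to (host="hostname1*") OR (host="hostname2*") and the outer search applies the wildcards itself:

index=* [| inputlookup hostList.csv
| eval host=host."*"
| fields host ]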
If neither the production logs nor the internal logs from the same UF are found then it's likely that the UF has lost its connection to the indexer(s).

Verify the UF has the right settings in outputs.conf.
Verify the network still allows connections from the UF to the indexers.
If you use TLS, confirm the certificate is still valid and that the UF has the correct password.
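For reference, a minimal outputs.conf sketch for a UF sending to two indexers (the hostnames and the receiving port 9997 are placeholders for your environment):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

The UF's splunkd.log (look at the TcpOutputProc entries) usually shows whether the forwarder is able to reach either of those servers.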
Hello all,

I am working on a Splunk query which is supposed to filter some logs by using data from a lookup. Consider a field called host. I have a list of hosts stored in a lookup (let's call it hostList.csv). I want to retrieve the list of servers from the hostList.csv lookup and then filter the host field against that list.

Note - I don't want to use the map command for this.

If there is any other way to pull off this logic, please help me with an example query and explanation. Thank you!
whoa! i didn't know about mode=multivalue.  thanks!
The app was only 3 dashboards, so I just created new dashboards by copying the source code.
I'm finding the Cisco documentation and support hard to follow. The NetViz agent is installed and running; the Java agent is installed but not working. Cisco Support is advising me that I need a standalone Java application to attach the Java agent to, but I haven't read this in the Network Visibility guidance. I'm confused: can I add this to the app agent? Has anyone got steps for this?
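For context, and hedged as a sketch rather than Cisco's official guidance: the AppDynamics Java agent instruments a JVM, so it has to be attached to some running Java process via the JVM's -javaagent flag. The paths and application/tier/node names below are placeholders:

java -javaagent:/opt/appdynamics/javaagent/javaagent.jar \
  -Dappdynamics.agent.applicationName=MyApp \
  -Dappdynamics.agent.tierName=MyTier \
  -Dappdynamics.agent.nodeName=node01 \
  -jar my-application.jar

If there is no Java application process in the picture, there is nothing for the Java agent to hook into, which may be what Cisco Support means by needing a standalone Java application.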
Hello @kiran_panchavat @Rhidian, maybe we could filter special characters with REGEXP_REPLACE(sql_text… in the SQL query? Thanks.
mvfilter can only reference one field at a time.

Description
This function filters a multivalue field based on an arbitrary Boolean expression. The Boolean expression can reference ONLY ONE field at a time.

https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/MultivalueEvalFunctions#mvfilter.28.26lt.3Bpredicate.26gt.3B.29

Try like this:

| makeresults count=1
| eval timestamps = mvappend("1700000000", "1800000020")
| foreach mode=multivalue timestamps [| eval older=if(<<ITEM>> < _time, mvappend(older,<<ITEM>>),older)]
Have you tried using single quotes to tell eval you're referring to a field name?

| eval older = mvfilter(timestamps < '_time')
I onboarded production logs to Splunk, but after restarting the UF I am not able to see the recent logs, and I am also not able to see the recent internal logs. How do I fix this issue? Please help.
@acambridge It's a Developer Supported app. Please contact https://www.qmulos.com/contact-us/
https://splunkbase.splunk.com/app/3079 (Qmulos - developer)
Is this a free or paid app? If paid, where can I find pricing?
Thanks, A
I need to filter a list of timestamps which are less than _time.

This works:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < 1570000010)

but the compared value is whatever is in _time. This does not work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval _time = 1570000010
| eval older = mvfilter(timestamps < _time)

I know the timestamps work, because this does work:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval older = mvfilter(timestamps < now())

Why do now() and static values work, but this does not:

| makeresults count=1
| eval timestamps = mvappend("1570000000", "1570000020")
| eval now_time = now()
| eval older = mvfilter(timestamps < now_time)

How can I get a variable in there to compare, since I need to compare the list to _time?
Unfortunately, it's the same with other indexes as well, including _* indexes. We tried with another user ID, and the issue is still the same.
@Bar_Ronen I would be interested.
On some of our Windows UF hosts, we were getting System events but no Security events. Our Windows admin noticed that the Splunk service account was running as an NT service account. After changing the service account to LocalSystem, the Windows UF hosts started sending their Security events.
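For anyone wanting to check this on a host, a quick sketch assuming the default Windows service name SplunkForwarder:

sc qc SplunkForwarder

The SERVICE_START_NAME line shows the logon account. Switching it to LocalSystem can be done from services.msc, or with sc config SplunkForwarder obj= LocalSystem followed by a restart of the service.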
Hi Community,

I'm exploring ways to ingest data into Splunk Cloud from an Amazon S3 bucket which has multiple directories and multiple files to be ingested into Splunk. I have assessed the Generic S3, SQS-based S3, and the Data Manager inputs for AWS available on Splunk, but am not getting the required outcome. My use case is given below:

There's an S3 bucket named exampledatastore; in that there's a directory named statichexcodedefinition, and in that there are multiple message IDs and dates. The example S3 structure is given below:

s3://exampledatastore/statichexcodedefinition/{messageId}/functionname/{date}/* - functionnameattribute

The {messageId} and {date} values are dynamic. I have a start date to begin with, but the messageId varies. Please can you assist me on how to get this data into Splunk? Many thanks!
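One pattern that might fit, offered only as a sketch: if the Splunk Add-on for AWS Generic S3 input is available to your Splunk Cloud stack (for example via an inputs data manager), you can point key_name at the static prefix and let the input walk the dynamic {messageId}/{date} subfolders, using initial_scan_datetime for your start date. The account, sourcetype, index, and datetime values below are placeholders:

[aws_s3://statichexcodedefinition_ingest]
aws_account = my_aws_account
bucket_name = exampledatastore
key_name = statichexcodedefinition/
initial_scan_datetime = 2024-01-01T00:00:00Z
sourcetype = my:custom:sourcetype
index = main
polling_interval = 1800

Whether Generic S3 or SQS-based S3 is the better fit usually comes down to data volume and whether the bucket can publish S3 event notifications; the stanza above only illustrates the prefix idea.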