All Topics


I am a newbie in Splunk and would like to improve search speed. I am using a Splunk instance in a VM (Master) that has indexed different data (more than 10 indexes at the moment). Can I create more search heads (VM SH 1 and SH 2) to speed up searches, and how can I achieve that? Thank you
Hi, As the query states, I have a stats command that gives me a table with all the required columns. While viewing the stats during the search, I am able to reorder the columns to the desired view using the table command. However, when I write the output to a CSV using outputlookup, the columns get rearranged in alphabetical order of their names. Is there any way we can override this? Thanks for the help!
Hello, I can see in many use case examples that tokens are used interchangeably in drilldown searches: $user$ and user=$user$. In both cases the user field is available in SPL and in the log fields. Is there any difference between using $user$ and user=$user$? If there is no difference, is one of them actually better? Thank you.
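A hedged illustration of the difference (the index name auth is an assumption). The first form constrains the extracted field; the second simply drops the token's raw value into the search string, so it can match the text anywhere in the event:

```
index=auth user=$user$
index=auth $user$
```

In practice user=$user$ is usually the safer choice, since the bare $user$ can match unrelated events that merely contain the same string; quoting as user="$user$" additionally protects against token values containing spaces.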
I want to create an index in the Splunk Cloud environment. I know how to create one in on-premises Splunk.
Hi, I have extracted a field username, and it may contain a domain together with the user:

username="google\\john"
username="googleuser"
username="admin"

I need a calculated field to be created in props.conf where google goes to a domain field and john goes to a user field. The domain part is only present in certain logs, so whatever is before "\\" should be treated as the domain and whatever is after "\\" as the user. In cases where there is no domain, the username should be mapped to the user field. All this should happen at the back end in props.conf. Expected result:

domain=google user=john
user=googleuser
user=admin
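A minimal sketch of such calculated fields, assuming the sourcetype is called your_sourcetype and the raw value uses a single backslash as the separator (both are assumptions):

```
[your_sourcetype]
EVAL-domain = if(match(username, "\\\\"), mvindex(split(username, "\\"), 0), null())
EVAL-user   = if(match(username, "\\\\"), mvindex(split(username, "\\"), 1), username)
```

After SPL string unescaping, the regex matches one literal backslash; split() then separates the domain and user parts, and usernames without a backslash fall through to the user field unchanged.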
I want to get a list of firewalls and servers sending logs to Splunk. What query should I use?
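One possible starting point is tstats over index metadata; narrowing it down to just firewalls and servers depends on your hostnames and sourcetypes, so this is only a sketch:

```
| tstats count latest(_time) as last_seen where index=* by host, sourcetype
| eval last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - count
```

Each row shows a host/sourcetype pair that has sent events, with the time of the most recent one.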
Hi all, is there any way to monitor SSL certificate expiry for any website?
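Splunk itself does not fetch certificates, so some input (a scripted input or a monitoring add-on) has to collect them first. Assuming certificate data is already indexed with an epoch expiry field (the index, sourcetype, and field names below are all assumptions), an alert search could be sketched as:

```
index=certs sourcetype=ssl_cert_info
| eval days_left = round((cert_expiry_epoch - now()) / 86400)
| where days_left < 30
| table site days_left
```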
I am using Splunk DB Connect to push my data from Splunk to an Oracle database. However, I can't figure out how to avoid pushing the same data into the database. For example, if I specify earliest=-5m and schedule execution every second, the same data will be pushed into the database. I have tried setting earliest=-5m and scheduling execution every 5 minutes, but it still has a small bug. Assume the next execution time is 12:00:00, the time field in the data is 11:59:59, and the upload to Splunk takes 2 seconds; then the time the data is stored in Splunk is 12:00:01. In that case the data may not be pushed to the database, since Splunk automatically uses the time from the time field of the data. Does anyone know how to solve this?
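One hedged workaround for the late-arrival gap is to select events by index time rather than event time, so events whose timestamp lags behind their arrival are still picked up (the index name is a placeholder):

```
index=your_index earliest=-15m
| where _indextime > relative_time(now(), "-5m") AND _indextime <= now()
```

With the scheduled search running every 5 minutes, each event should fall into only one _indextime window regardless of how delayed its event timestamp was, assuming the schedule runs reliably.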
Please let me know where I can get the latest product/technical presentations. I checked Partnerverse, but I couldn't find them under Resources > Content library. I'm a reseller of Splunk, by the way.
Hi all, we are having a little trouble finding the cause of active universal forwarders' status disappearing and then becoming active again. We have also checked from the network side, and there are no network-related problems. This doesn't always happen, but it is very annoying operationally. Architecturally we have one search head, one indexer, one heavy forwarder in DC1, and one heavy forwarder in DC2. For universal forwarders, we have 35 in DC1 sending to the DC1 heavy forwarder and 35 in DC2 sending to the DC2 heavy forwarder. We would really appreciate help with this problem. Thanks for your attention.
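To narrow down when each forwarder drops off, the receiving instance's internal metrics can be checked for incoming forwarder connections; a sketch (the 10-minute threshold is an arbitrary assumption):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 10
```

Comparing these gaps against splunkd.log on the affected universal forwarders (connection timeouts, blocked queues) may show whether the forwarder or the receiving heavy forwarder is the cause.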
Hello Splunk lovers! I have a problem with a Splunk cluster. I have 3 SHs and 3 indexers. On indexer-1 I installed the deployer, indexer management, and the cluster master. On the cluster master I see indexer-1 listed as both a search head and an indexer, but it should not be a search head, only an indexer. How can I set it to be an indexer only? Thank you for the help!
Can anyone create a playbook in SOAR to get emails from an Exchange server based on a message_id that is already in the artifact, and also use a query on the message to search for some keyword?
index=mail
| lookup email_domain_whitelist domain AS RecipientDomain output domain as domain_match
| where isnull(domain_match)
| lookup all_email_provider_domains domain AS RecipientDomain output domain as domain_match2
| where isnotnull(domain_match2)
| stats values(recipient) as recipient values(subject) as subject earliest(_time) AS "Earliest" latest(_time) AS "Latest" by RecipientDomain sender
| where mvcount(recipient)=1
| eval subject_count=mvcount(subject)
| sort - subject_count
| convert ctime("Latest")
| convert ctime("Earliest")

I have a list of suspicious keywords in a lookup called suspicoussubject_keywords (created with the Lookup Editor). Can you include a query to look these keywords up in the subject and then display the results? In another use case, I have a lookup called filtersubjects listing subjects not to show: results where the subject contains words like CV or Resume should be excluded. Can you help me with the query?
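A sketch of both filters using subsearches, assuming each lookup has a single column named keyword (the column name is an assumption): each subsearch expands into wildcarded subject="*word*" terms, one set required and one excluded.

```
index=mail
    [| inputlookup suspicoussubject_keywords
     | rename keyword as subject
     | eval subject = "*" . subject . "*"
     | format ]
    NOT
    [| inputlookup filtersubjects
     | rename keyword as subject
     | eval subject = "*" . subject . "*"
     | format ]
| table _time sender recipient subject
```

These clauses can be placed right after index=mail in the existing search; note that very large keyword lists can run into subsearch result limits.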
Hi, I am creating REST APIs for fetching data from an external server. I was able to create the API successfully. Question/problem: I have two apps on the same Splunk instance, 1) Development and 2) Production, and both contain similar code for the REST APIs. In restmap.conf:

[script:build]
match=/build
script=build.py
scripttype=persist
handler=application.build

When I call the REST API from either the Development or the Production application, the call is always made to the API in the Development application. I have tried using servicesNS/<app_name>/<user_name>/<rest_endpoint>, but the call is still made to the API under Development. How can I create the REST API in such a way that it belongs to the app's namespace? P.S. I need to keep the code the same for both the Production and Development applications.
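One caveat worth noting: restmap.conf stanza names and match paths are merged globally across the instance, so two apps both defining [script:build] with match=/build collide, and only one handler ends up being served. A hedged workaround sketch is to give each app its own stanza name and path (the _dev/_prod names and paths are assumptions):

```
# Development app, default/restmap.conf
[script:build_dev]
match=/dev/build
script=build.py
scripttype=persist
handler=application.build

# Production app, default/restmap.conf
[script:build_prod]
match=/prod/build
script=build.py
scripttype=persist
handler=application.build
```

The Python file can stay identical in both apps; only the stanza differs.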
How can I extract a local user account in Splunk from syslog messages in a Unix audit.log? Thanks
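Linux auditd records usually carry the account name in an acct= key/value pair, sometimes quoted. A sketch of a search-time extraction (the sourcetype name is an assumption):

```
sourcetype=linux:audit
| rex field=_raw "acct=\"?(?<acct_user>[^\"\s]+)"
| stats count by acct_user
```

The same regex could be made permanent as an EXTRACT- entry in props.conf for the audit sourcetype.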
Hello Splunk Experts, I am using the Squid Proxy dashboard, the TA squid plugin, etc., and it works well. I notice the dashboard uses sourcetype=squid to gather data, while the Splunk documentation for the proxy suggests using "squid:access;recommended" for the Squid Proxy TA plugin. I have modified the inputs.conf file on my forwarder to use index=squid and sourcetype=squid, and I have squid.conf using the Splunk-recommended log format; everything works. My question is: could we use this app to determine the bandwidth for a given time frame? I see the values bytes, bytes_in (mostly 181 for many events), and bytes_out. Can any of these values provide information on total bandwidth or usage? thanks, eholz1
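Assuming bytes_in and bytes_out are per-request byte counts from the recommended log format (what each represents depends on the squid.conf logformat in use), a sketch of an hourly bandwidth chart:

```
index=squid sourcetype=squid
| timechart span=1h sum(bytes_in) as bytes_in sum(bytes_out) as bytes_out
| eval MB_in = round(bytes_in / 1024 / 1024, 2), MB_out = round(bytes_out / 1024 / 1024, 2)
| fields _time MB_in MB_out
```

If bytes_in is almost always 181, it may be recording request header size rather than payload, in which case bytes or bytes_out is probably the better measure of transfer volume.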
Hi, I have a field, mode, which either returns data or is None (mode_true, mode_false). I'm trying to aggregate the counts of both outcomes separately over time, then plot both on a timechart (as a stacked bar or similar, where the data is separated categorically). I've tried the two queries below with no luck.

1.)

index=mode_qa module=mode_producer_logs "telemetry_event"
| extract pairdelim="\"{,}" kvdelim=":"
| multisearch [search mode!=None | eval type="mode_true"] [search mode=None | eval type="mode_false"]
| timechart span=60m count(mode) by type

2.)

index=mode_qa module=mode_producer_logs "telemetry event"
| extract pairdelim="\"{,}" kvdelim=":"
| eval modeBool = if(mode!=None, "mode_true", "mode_false")
| timechart count by modeBool span=30m

Both display a visual with only one data series (not binary, e.g. mode_true vs mode_false). Any ideas? New to Splunk.
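One likely cause: in eval, an unquoted None is read as a field name rather than the string "None", so mode!=None evaluates to null and every event lands in the same branch (multisearch is also only valid as the first command in a search, so variant 1 fails for a second reason). Assuming the producer emits the literal string None, a sketch:

```
index=mode_qa module=mode_producer_logs "telemetry_event"
| extract pairdelim="\"{,}" kvdelim=":"
| eval modeBool = if(isnotnull(mode) AND mode != "None", "mode_true", "mode_false")
| timechart span=30m count by modeBool
```

Selecting "Stacked" under the column chart's format options then separates the two series categorically.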
I saw a previous thread about this from 7 years ago marked as "Fixed". We are using the most up-to-date version of the app in Splunk Cloud. I updated the setup object, removed the href line, and changed dashboard=true; this now allows me to view the setup dashboard and upload our service account key. The only issue that remains is not being able to see Data Input -> Google Spreadsheet. I did a test install on our HF and could see the input there, but not in our cloud instance. We do have other Google apps/add-ons installed in our cloud instance, so I wonder if the original issue re-manifested itself? Original issue: https://community.splunk.com/t5/All-Apps-and-Add-ons/No-new-input-type-as-quot-Google-Spreadsheet-quot-after-install/td-p/262203
I'm new to Splunk and having a tough time getting this to work. I'm trying to create an alert if the current 24-hour time range is 30% less than the previous 24-hour time range, offset by 3 hours. Any help is appreciated!

index=_internal source="*metrics.log" group="per_sourcetype_thruput" series="aws:cloudwatch" earliest=-27h latest=-3h
| eval b=len(_raw)
| eval MB=b/1024/1024
| timechart span=24h sum(MB) as MB
| timewrap 1d series=short
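A sketch of one way to make the comparison: per_sourcetype_thruput events carry a kb field with the measured throughput (using it avoids measuring the size of the metrics event itself via len(_raw)), and the window is widened to 48 hours so there are two full periods to compare. The exact offsets are assumptions:

```
index=_internal source="*metrics.log" group="per_sourcetype_thruput" series="aws:cloudwatch" earliest=-51h latest=-3h
| eval MB = kb / 1024
| bin _time span=24h
| stats sum(MB) as MB by _time
| delta MB as diff
| eval pct_change = round(diff / (MB - diff) * 100, 1)
| where pct_change < -30
```

If the final where clause returns a row, the most recent 24-hour period dropped more than 30% below the previous one, which can drive the alert's trigger condition (number of results > 0).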
I have a field called Node_ID that I extracted from another field, issue, which is formatted as N1234. There were some events that didn't fit and couldn't be extracted normally, so I used eval to identify them:

| eval Node_ID=if(like(issue, "WC SVR%"), "WC SVR", Node_ID), Node_ID=if(like(issue, "EU SVR%"), "EU SVR", Node_ID), Node_ID=if(like(issue, "SE SVR%"), "SE SVR", Node_ID), Node_ID=if(like(issue, "NE SVR%"), "NE SVR", Node_ID)

This does what I want and adds those values to the field Node_ID, but when I try to search by one of them:

| search Node_ID="WC SVR"

I get zero results, even though I can see there are 4 events when looking at the field in the sidebar. Is there a reason behind this? ** Any suggestions to do this another way are always appreciated **
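As an alternative, the chained ifs can be collapsed into a single case(), and where can be used for an exact comparison on the eval'd value (a sketch; whether this resolves the zero-results symptom depends on where the original Node_ID extraction happens):

```
| eval Node_ID = case(
    like(issue, "WC SVR%"), "WC SVR",
    like(issue, "EU SVR%"), "EU SVR",
    like(issue, "SE SVR%"), "SE SVR",
    like(issue, "NE SVR%"), "NE SVR",
    true(), Node_ID)
| where Node_ID = "WC SVR"
```

It is also worth checking for stray whitespace or case differences in the sidebar values, e.g. with | eval Node_ID = trim(Node_ID).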