All Topics



Hello everyone. I want to build a statistic of tickets: how many are opened each day, by CI Name. I also want to add an average (AVG) of tickets opened across all days, like this:

CI Name   2021-01-20   2021-01-20   2021-01-20   AVG
Test 1    5            1            1            2,3
Test 2    3            3            3            3

This is my current search:

index=prod_test
| dedup dv_number
| eval openday=strftime(strptime(opened_at, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| chart count(dv_number) as anzahl by dv_cmdb_ci openday
| sort anzahl desc
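One way to append an AVG column to that chart is to flatten it with untable, compute a per-CI average as an extra "day", and pivot back with xyseries. A sketch building on the search above (note the caveat: the average only covers days that actually appear in the data, so days with zero tickets are not counted):

```spl
index=prod_test
| dedup dv_number
| eval openday=strftime(strptime(opened_at, "%Y-%m-%d %H:%M:%S"), "%Y-%m-%d")
| chart count(dv_number) as anzahl by dv_cmdb_ci openday
| untable dv_cmdb_ci openday anzahl
| appendpipe
    [ stats avg(anzahl) as anzahl by dv_cmdb_ci
      | eval openday="AVG", anzahl=round(anzahl, 1) ]
| xyseries dv_cmdb_ci openday anzahl
```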
Hi, I want to gather log files from several machines whose performance is really important to me. A few questions come to mind:

1. When I use a forwarder, how many resources are used by the Splunk forwarders? (I know it depends on a lot of parameters, but I need a rough estimate.)
2. Is it better to use syslog to gather the logs on a centralized server and have Splunk analyze them there? (The Splunk forwarder compresses and encrypts logs when sending them to the log server, which uses resources.)
3. Do any best practices or use cases exist for this goal?

Thanks,
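On point 1, the Universal Forwarder's network and CPU impact can be bounded explicitly in limits.conf rather than estimated; a minimal sketch (256 KB/s is the UF's shipped default, shown here for illustration):

```ini
# limits.conf on the Universal Forwarder
[thruput]
# Cap forwarding throughput in KB/s; 0 = unlimited.
# Lowering this trades ingest latency for a smaller resource footprint.
maxKBps = 256
```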
Looking to find which ES use cases exist that use the Certificate and/or Alert data models.
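One way to enumerate this on a running ES search head is to list correlation searches over REST and filter for the data models in their search strings. A sketch — the `action.correlationsearch.*` fields follow ES's correlation-search convention, but verify against your version:

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
    (search="*Certificate*" OR search="*Alert*")
| table title, search
```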
Hi All, we run only one Splunk instance within our network and plan to open IP ranges and ports in order to collect Teams logs. I wonder if I have to open all the addresses and ports listed here: https://docs.microsoft.com/en-us/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide#skype-for-business-online-and-microsoft-teams Do we have to open all of the IPs and ports? Isn't that risky from a security point of view? Any comment would be appreciated.
Hi everyone, I need to monitor resource status in Failover Cluster Manager. Any suggestions on how we can implement it? I found this link related to it: https://community.splunk.com/t5/Deployment-Architecture/How-to-monitoring-a-log-in-a-cluster-failover-environemnt/m-p/25794 I just want to know if there is any other way to monitor it.
Hi Team, I want to get notified when someone creates a field extraction on the search head, or uploads or creates a lookup file. I want to capture the user who created the field extraction or lookup, the file name, and the timestamp, so it can be used for tracking purposes. Kindly help with the query.
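Changes made through Splunk Web show up in _internal as splunkd_ui_access requests, so one approach is to watch POSTs to the relevant REST endpoints. A sketch — the uri_path patterns below are assumptions, so confirm them by browsing your own splunkd_ui_access events after making a test extraction and lookup upload:

```spl
index=_internal sourcetype=splunkd_ui_access method=POST
    (uri_path="*/data/props/extractions*"
     OR uri_path="*/data/transforms/extractions*"
     OR uri_path="*/data/lookup-table-files*")
| table _time, user, method, uri_path, status
```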
Hi Team, I want to create and schedule an alert covering two scenarios. In the first case I have a number of hosts, and if no logs are ingested into Splunk from one of them for more than 15 minutes, an email alert should trigger. The second requirement is for any host (*): if no events arrive from any host at all, an email should go to the team. For the first case, consider this data as an example: hosts abc, def, ijk, mne, zda, and so on. Kindly help with the query.
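A common pattern for "host went quiet" alerting is tstats over index-time metadata, which is fast and needs no raw-event scan. A sketch for the first scenario (the index and host list are placeholders; also note a host that has *never* reported will not appear here at all, so pair this with a lookup of expected hosts if that matters):

```spl
| tstats latest(_time) as last_seen where index=* host IN (abc, def, ijk, mne, zda) by host
| eval minutes_quiet = round((now() - last_seen) / 60)
| where minutes_quiet > 15
```

For the second scenario, drop the `by host` and the host filter and alert when the search returns a result (or returns none, depending on how you phrase the condition).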
Hi, on a Windows server, how would I start the streamfwd process? I had to end the process in Task Manager, but now I want to start it again.
After I updated the password in the TA setup, I am not seeing any data, and ta_QualysCloudPlatform.log shows the following error:

[MainThread] ERROR: Authentication Error, but we're using stored creds, so we will sleep for 300 seconds and try again, as this is a temporary condition. Retry Count: 1
Hi Splunkers, I am facing a strange issue: the Splunk forwarder has stopped forwarding data. The forwarder itself is running fine, but in the Splunk logs I see:

The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data

I tried restarting the indexers and the forwarder; it works for a while and then stops again. Could you please advise? Thanks, Amit
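To see which pipeline queue is backing up first (parsing, typing, indexing, and so on), the queue metrics in _internal are usually the quickest diagnostic. A sketch to run against the indexers:

```spl
index=_internal source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart max(fill_pct) by name
```

Whichever queue pins at 100% earliest in the chain typically points at the bottleneck (e.g. the indexing queue suggests disk I/O, the parsing queue suggests heavy props/transforms).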
Hi, I have seen a significant traffic increase (Network In) in our environment. I tried investigating in Splunk, but I am finding it difficult to validate the cause. Any help or guidance toward a potential solution would be much appreciated. Thank you!
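If the increase is Splunk ingest itself, license_usage.log on the license master breaks daily volume down by sourcetype, host, and index, which often identifies the new talker quickly. A sketch (the `st` field is the sourcetype in license_usage events):

```spl
index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(eval(b/1024/1024)) as MB by st
```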
Hi All, I have events with text strings like this:

..._Code/> <InDesc>Diagnosis=Read Code,Comment=carrying | ladder and triped and  fell hurt  L Shoulder / upper back- issues is pain,DiagnosisSide=right</InDesc> <First_Name...

I want to redact the InDesc content, and can easily do so with this sort of thing:

SEDCMD-test = s/(<InDesc>)[^<]+/\1Splunk_Redacted/g

giving a result like:

..._Code/> <InDesc>Splunk_Redacted</InDesc> <First_Name..

BUT I would prefer to retain the structure of the InDesc text, i.e. replace every digit with 9 and every letter with A or a, leaving the rest alone. I can do that part on its own like this:

echo "Diagnosis=Read Code,Comment=carrying | ladder and triped and  fell hurt  L Shoulder / upper back- issues is pain,DiagnosisSide=right" | sed y/abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789/aaaaaaaaaaaaaaaaaaaaaaaaaaAAAAAAAAAAAAAAAAAAAAAAAAAA9999999999/

which gives this output:

Aaaaaaaaa=Aaaa Aaaa,Aaaaaaa=aaaaaaaa | aaaaaa aaa aaaaaa aaa  aaaa aaaa  A Aaaaaaaa / aaaaa aaaa- aaaaaa aa aaaa,AaaaaaaaaAaaa=aaaaa

But how can I combine the two to achieve this output:

..._Code/> <InDesc>Aaaaaaaaa=Aaaa Aaaa,Aaaaaaa=aaaaaaaa | aaaaaa aaa aaaaaa aaa  aaaa aaaa  A Aaaaaaaa / aaaaa aaaa- aaaaaa aa aaaa,AaaaaaaaaAaaa=aaaaa</InDesc> <First_Name..

Any thoughts would be much appreciated.

Thanks, Keith
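A sed y/// transliteration cannot be scoped to a capture group, which is the crux of the problem. If masking at search time (rather than index time) is acceptable, nested replace() calls applied only to the extracted field do the same job; a sketch, assuming the InDesc content never contains a literal <:

```spl
| rex field=_raw "<InDesc>(?<indesc>[^<]+)</InDesc>"
| eval masked = replace(replace(replace(indesc, "[a-z]", "a"), "[A-Z]", "A"), "[0-9]", "9")
```

The order matters: lowercase letters are mapped to "a" first, so the subsequent [A-Z] pass cannot touch them. For index-time redaction proper, a chain of SEDCMD s/// rules is the only tool available, and scoping each rule to text inside the tags is considerably more awkward.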
We are currently setting up an on-premise geo server, and our controller is SaaS. I believe the geo server URL needs to be publicly accessible from the controller for this to work. Also, each individual IP address needs to be defined in the local geo-ip-mappings.xml, as per https://docs.appdynamics.com/display/PRO41/Use+a+Custom+Geo+Server+For+Browser+RUM. Is there anything more I need to take care of? Any suggestions would be helpful.

Edited by @Ryan.Paredez for readability.
We have data ingesting into Splunk via a HEC token, and have observed that event time parsing is not working properly. Example: in the event the timestamp looks like 2020-12-01 09:59:18.0674, but Splunk captured 12/1/20 9:59:18.000 AM. The milliseconds are missing from the Splunk time, but it's not limited to the milliseconds; sometimes the seconds field is not correct either. We tried applying a time format and time prefix for the source and sourcetype as below, but it did not fix the issue:

TIME_PREFIX = "Date": "
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N

We also tried this props.conf:

[the_sourcetype]
AUTO_KV_JSON = false
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = Date

We use the collector/event REST endpoint. Splunk version 7.2.8.
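Worth noting: with the /services/collector/event endpoint, events arrive pre-parsed, so index-time timestamp settings like TIME_PREFIX/TIME_FORMAT are generally not applied; the event time is taken from the payload's top-level "time" field, which accepts epoch seconds with a fractional part. A sketch of a sender-side fix, assuming the sender can compute epoch time (the "msg" field is just a placeholder):

```json
{
  "time": 1606816758.0674,
  "sourcetype": "the_sourcetype",
  "event": {"Date": "2020-12-01 09:59:18.0674", "msg": "example"}
}
```

Alternatively, sending the raw payload to /services/collector/raw routes the data through normal sourcetype-based parsing, where TIME_PREFIX and TIME_FORMAT would apply.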
We are seeing a delay in indexing. This started to happen after upgrading the AWS TA from 4.0.6 to 5.0.3. The TA log contains many ERROR messages which appear to be relevant.
All, I have this search here and it's pretty slow. Any recommendations to speed it up? It currently takes 250.249 seconds, and that just seems high.

index=osnixsec sourcetype=linux:audit host=*domain.net earliest=-7d@d latest=-2h@h
    NOT [ search index=osnixsec sourcetype=linux:audit host=*domain.net earliest=-2h@h latest=now
          | fields host | dedup host | table host ]
| fields host | dedup host | table host
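Since only host values are needed, tstats over index-time metadata can replace both the raw-event scan and the subsearch. A sketch that returns hosts seen in the 7-day window but silent for the last 2 hours:

```spl
| tstats latest(_time) as last_seen
    where index=osnixsec sourcetype=linux:audit host=*domain.net earliest=-7d@d latest=now
    by host
| where last_seen < relative_time(now(), "-2h@h")
```

This reads only the indexed metadata, so it should run in seconds rather than minutes; the leading-wildcard host filter is still the most expensive part.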
Hello fellow Splunkers, I'm using Splunk Eventgen to simulate some data records that are required to test certain queries. I want to generate 1000 events (each event corresponds to a unique service ID, represented by the field svcId) in an interval of 5 minutes. I therefore expect 1000 svcIds to be generated every 5 minutes, with one and only one event per svcId in each 5-minute interval. However, when I implemented this using a sample app with the required eventgen.conf and a sample record, I see that 3 records are generated per svcId within every 5-minute interval. Based on the data observed in the Eventgen logs, I think the modinput code is spawning three threads by default, and each thread is generating data independently based on the eventgen.conf inputs. I have played around with some of the other settings in eventgen.conf, like maxIntervalsBeforeFlush, maxQueueSize, and delay, but have so far been unsuccessful. I'm not sure what I am doing wrong here, and would appreciate help from the gurus who can explain it. Thanks.
Below are the configurations that I use for my test app.

eventgen.conf:

[seo]
sampletype = csv
interval = 300
count = 1000
outputMode = splunkstream
token.0.token = (timeRecorded=\d+)000,
token.0.replacementType = timestamp
token.0.replacement = %s
token.1.token = (svcId=\d+)
token.1.replacementType = integerid
token.1.replacement = 1000
token.2.token = lag-105:355.(\d+)
token.2.replacementType = integerid
token.2.replacement = 1000
token.3.token = (policerId=2)
token.3.replacementType = static
token.3.replacement = 2
token.4.token = (timeCaptured=\d+)000,
token.4.replacementType = timestamp
token.4.replacement = %s
token.5.token = (allOctetsDropped=\d+)
token.5.replacementType = static
token.5.replacement = 0
token.6.token = (allOctetsForwarded=\d+),
token.6.replacementType = random
token.6.replacement = integer[1000000:9999999]
token.7.token = (allOctetsOffered=\d+),
token.7.replacementType = static
token.7.replacement = 0

Sample file (seo):

index,host,source,sourcetype,"_raw"
"main","test_host2","test_source","test_src_type","timeRecorded=1611533616000,svcId=13088157,0,lag-105:355.1513,policerId=2,timeCaptured=1611535424000,,,,,allOctetsDropped=0,allOctetsForwarded=2924133555698,allOctetsOffered=292713155698,,,,,minimal"
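If the triplicate output really does come from multiple generator threads, recent Splunk Eventgen versions expose the worker count as a global setting, so pinning it to one is worth a try. A sketch — the setting name is taken from Eventgen's global settings, so verify it against the eventgen.conf.spec shipped with your version:

```ini
# eventgen.conf, alongside the [seo] stanza above
[global]
# Run a single generator worker so each interval is produced exactly once
generatorWorkers = 1
```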
I'm relatively new to Splunk, but after a few challenges I have my Splunk deployment up and running. I've limited this to 7 specific Windows event codes, namely (4776,4720,4723,1102,4624,4726,4625). I don't have to stick with these, but I had advice from Splunk that this would be a good start.

However, I now need to turn these into a good set of security use cases. I probably need to expand my inputs.conf file, as it is currently deliberately limited:

[WinEventLog://Application]
disabled = 1
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4776,4720,4723,1102,4624,4726,4625
index = wineventlog
renderXml = true

[WinEventLog://System]
disabled = 1
start_from = oldest
current_only = 0
checkpointInterval = 5
renderXml = true

So my question is: to get some good security event logs from what I have configured above, do I just work with this and try to get the search terms to match what I'm trying to achieve, or is there anything more fundamental I need to change first?

Also, if there are any good resources for Windows security searches, that would be great.

Thanks!
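As a starting point, one classic use case built from those codes is brute-force detection on 4625 (failed logon). A sketch, assuming the XML-rendered events land in index=wineventlog with sourcetype XmlWinEventLog and that the Splunk Add-on for Microsoft Windows is extracting EventCode (field names may differ without the add-on):

```spl
index=wineventlog sourcetype="XmlWinEventLog" EventCode=4625
| bin _time span=5m
| stats count by _time, host
| where count > 10
```

The same shape works for the other codes: 4720/4726 for account creation/deletion outside change windows, 1102 for audit-log clearing, and 4624 following a burst of 4625s for a likely successful brute force.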
Hello All, I have a default app which gets installed on the UF during installation (part of our install script). To disable this app, I created an app on the deployment server with the same name and changed the value of state from enabled to disabled. When I checked the UF, the app still remained enabled. I am wondering why it is doing that? Can't I disable the app on the UF without the deployment server having installed it?
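For deployment-server-managed clients, the supported switch is stateOnClient in serverclass.conf on the deployment server; it only takes effect for apps the deployment server actually deploys, which is why deploying a same-named app that replaces the locally installed copy is the usual approach. A sketch (the server class and app names are placeholders for your own):

```ini
# serverclass.conf on the deployment server
[serverClass:all_ufs:app:my_default_app]
# Deploy the app but leave it disabled on the client
stateOnClient = disabled
restartSplunkd = true
```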
Hi all, I was wondering if anyone knew of a way to reset the default search time period inside the Metrics Workspace. The default seems to be an hour, which is independent of the default period set in the $SPLUNK_HOME/etc/system/local/ui-prefs.conf file, which I've set to 1 day:

[search]
dispatch.earliest_time = 1d@d
dispatch.latest_time = now

The options provided in /opt/splunk/etc/apps/splunk_metrics_workspace/README/workspace.conf.example seem to provide a lookback option:

[metadata]
earliest = -1d

but this does not affect the lookback time indicated earlier. The /opt/splunk/etc/apps/splunk_metrics_workspace/README/workspace.conf.spec option also does not change the default 1-hour lookback:

[metadata]
earliest = -1m
#* Sets how far back to query the metrics catalog.
#* Shortening this could speed up the catalog query.
#* Default: -2d (? the default seems to be 1 hour)

Beyond the README folder the documentation seems limited; any help is appreciated.

Roelof @fcannon_splunk @kvarnun_splunk