All Topics

Please help me with a Splunk query to find the following two things: 1. the percentage of events/logs used by different indexes; 2. the percentage of events/logs used by different hosts. @isoutamo @thambisetty
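A sketch of one way to do this, using eventstats to turn per-index counts into percentages of the total (the `index=*` scope and time range are placeholders; swap `index` for `host` in the BY clause to answer the second question):

```spl
index=*
| stats count BY index
| eventstats sum(count) AS total
| eval percent=round(count/total*100, 2)
| fields index count percent
```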
I have extracted from the logs how many are running, but I'm not able to write a query for how many are present on the server. Can anyone help with this?
Hi all, in the AWS input logs we are getting timestamps 2 hours behind, and we need to adjust them to UTC+02:00. I have applied this in props.conf on the HF where the AWS input is configured, as below:

```ini
[source::s3:/cloudfx-s3/*]
TZ = UTC+02:00
```

But it didn't work. Can someone please let me know if this is the right way to adjust the timestamp in the logs? A sample event:

```
2020-09-22 12:14:43 FCO50-C1 2253 5.171.196.19 GET d1q57ainn85gvl.TA_jvmjam.net /fe-api/v1/notifications 200 https://m.lego.it/scommesse-live Mozilla/5.0%20(Linux;%20Android%2010;%20Mi%209T%20Pro)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/85.0.4183.81%20Mobile%20Safari/537.36 =1600770582725 - Miss QumS5aHxkycZd-vjOLlapECGcIYloeTTUq4KursjmmdpHWotnCLDQ== m.lego.it https 2147 0.110 - TLSv1.3 TLS_AES_128_GCM_SHA256 Miss HTTP/2.0 - - 32299 0.110 Miss application/json;%20charset=utf-8 1895 - -
2020-09-22 12:14:43 IAD66-C1 23128 157.55.39.108 GET d1q57ainn85gvl.TA_jvmjam.net /slot-machine/wild-rails/ 200 - Mozilla/5.0%20(compatible;%20bingbot/2.0;%20+http://www.bing.com/bingbot.htm) - - Miss jG0oTG9mljNfR0k-NQ5R6u_EWH0v0cggDlPDLfzmOgPEMMJrDHCtiQ== www.lego.it https 296 0.594 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/1.1 - - 13054 0.468 Miss text/html;%20charset=utf-8 22053 -
```

Here it is 12:14:43, but we need it shifted +2h, i.e. 14:14:43.
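For what it's worth, `TZ` in props.conf declares the timezone the raw timestamps are already *in*; Splunk converts them to epoch time and then renders events in each user's timezone preference. A sketch assuming these are CloudFront access logs, whose timestamps are UTC (the stanza pattern is copied from the post and would need to match the actual source):

```ini
# props.conf on the HF that parses this input.
# TZ names the zone the raw timestamps are IN (UTC for CloudFront logs),
# not the zone you want them displayed in; the display timezone is a
# per-user preference in Splunk Web.
[source::s3:/cloudfx-s3/*]
TZ = UTC
```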
I've tried to reach the sales team several times during the last week about extending an existing enterprise license, but there is no response at all to emails/phone calls, and some contact data also seems to be outdated. Could someone share an email address that is actually monitored, or a best practice for reaching them? Thanks, Peter
Hi, we have a Splunk Enterprise deployment, and we would like to know whether there is a way to write a query that shows a graph of used storage per month. The idea is to decide whether to move to Splunk Cloud based on the statistics such a query provides. Thanks
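One possible proxy, assuming the `_internal` index is retained long enough: license_usage.log records ingested bytes, which can be charted per month. Note this measures ingest volume, not on-disk size after compression, so treat it as an estimate:

```spl
index=_internal source=*license_usage.log* type="Usage"
| timechart span=1mon sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields _time GB
```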
We have a single-server Splunk deployment on a small unique network where all hosts are powered down at night, including the Splunk server. This gives us an issue with tracking our license usage; the 30-day license usage report shows no data. According to the Admin Manual, "If the license master is down during the time period that represents its local midnight, it will not generate a RolloverSummary event for that day, and you will not see that day's data in these panels." Is there a way to force Splunkd to initiate the process that creates the RolloverSummary license_usage.log at a custom time instead of at midnight?  
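I'm not aware of a documented way to change the rollover time, but daily totals can be reconstructed from the per-pool `type=Usage` events, which are written continuously while splunkd is up and don't depend on the midnight RolloverSummary event. A sketch:

```spl
index=_internal source=*license_usage.log* type="Usage"
| timechart span=1d sum(b) AS bytes
| eval GB=round(bytes/1024/1024/1024, 3)
```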
Hey all, long story short: I have a Windows IIS FTP server on a heavy forwarder that receives logs from Cisco proxy servers, and I am monitoring the FTP folders that contain the Cisco proxy logs.

The problem is that the logs uploaded to the FTP server have an owner of ciscoftp, and Splunk is unable to read files with this owner. If I set the file owner to Administrators, Splunk reads and ingests the logs as required. Splunk runs as the Local System account, and I have granted "Everyone" full control of the folder for testing purposes, but as long as the file owner is set to ciscoftp (a local user account), Splunk cannot read the file. I have another folder full of Cisco ESA logs whose file owner is administrator by default, and Splunk reads those files out of the box.

My issue is two-fold: 1) how do I set the file owner to Administrators by default, and/or 2) how do I get Splunk to read files created by the ciscoftp user? At this stage it looks like I may need a script to set the permissions on a periodic basis, which I'd rather avoid. Has anyone experienced a similar issue? Any help would be awesome. Thanks, Trev
I am after some help debugging why Splunk is not monitoring my external .evtx files. I currently have the following in %SplunkHome%/etc/system/local/inputs.conf:

```ini
[monitor://E:WINEVT\Logs\*]
disabled = 0
index = event_collector
sourcetype = WinEventLog
```

I have tried to debug this using `splunk list inputstatus`, and I can see that Splunk is reading the file, but it is not getting indexed, and I am getting output on my TCP stream to the indexer like this:

```
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\
```

I have also tried:

```ini
[WinEventLog://E:WINEVT\Logs\*]
disabled = 0
index = event_collector
sourcetype = WinEventLog
```

with no luck, and no output on the TCP stream to the indexer. Any tips on debugging or solutions are much appreciated.
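Two observations, offered as guesses: `.evtx` files are binary, so a plain `monitor://` input streams their raw bytes (which would explain the `\x00` runs), and `WinEventLog://` stanzas take an event log channel name rather than a file path. Also, the path as written is missing a backslash after the drive letter (`E:WINEVT` vs `E:\WINEVT`). A sketch of the channel form for live event logs:

```ini
# inputs.conf -- WinEventLog inputs address a channel name, not a file path
[WinEventLog://Security]
disabled = 0
index = event_collector
```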
Hello, I'm a newbie. Is there a point in using Splunk Enterprise Security on an air-gapped network, or should I just use it on an outside network?
Hello, I'm a newbie, and I'm tasked with figuring out Splunk at my work site. I have Splunk Enterprise 5 GB and around 200 users. Can I make them all forwarders?
After upgrading to 8.0.6, a search with `| rest` fails when it is sent to instances on older versions, but succeeds against instances on the same version. In the GUI, `| rest splunk_server=olderversion /services/server/info` fails with the error below:

```
WARN: [olderversion] Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server=https://127.0.0.1:8089 - Bad Request
```
Hello, is there a way to know the current checkpoint value for a rising column in a DB Connect input? I would like to obtain this value through a REST API, and also with a shell command that works even when Splunk is down. I am using DB Connect 3.4.0. Thanks, Christian
Hi, I'm trying to reload the deployment server via the REST API when a saved-search result matches a hostname. But whenever the saved search runs, the deployment server is reloaded every time: the saved search runs on a 5-minute cron schedule, and there is a deployment-server reload every 5 minutes. How do I reload the deployment server only when the hostname matches?

My savedsearches.conf:

```ini
[Syslog New Source Monitor]
action.syslogmonitor = 1
alert.digest_mode = 0
alert.suppress = 0
alert.track = 0
counttype = number of events
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
display.general.type = statistics
display.page.search.tab = statistics
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = search
request.ui_dispatch_view = search
search = index=inotify \
| rex field=_raw "ISDIR\s(?<path>.+)" \
| eval orig_path="/var/log/splunk/syslog/" \
| eval new_path=orig_path . path \
| stats count by host new_path \
| eval API = case(host=="splunk-sch", \
[| rest /services/deployment/server/serverclasses//HF%201/reload], host=="splunkfwd", \
[| rest /services/deployment/server/serverclasses//HF%202/reload], 1=1, 0)
```

No result comes back, but the case statement's subsearches still run.
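The behavior is expected: subsearches are expanded when the outer search is parsed, not when a `case()` branch is taken, so both `| rest ... /reload` calls fire on every run regardless of the results. One possible rework uses `map`, which runs its search once per result row, so the reload only happens when a matching host actually appears (serverclass names carried over from the case statement above; an untested sketch):

```spl
index=inotify
| rex field=_raw "ISDIR\s(?<path>.+)"
| stats count by host
| search host="splunk-sch" OR host="splunkfwd"
| eval serverclass=case(host=="splunk-sch", "HF%201", host=="splunkfwd", "HF%202")
| map maxsearches=2 search="| rest /services/deployment/server/serverclasses/$serverclass$/reload"
```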
Hi everyone, I'm working on combining two lookups for a report. My question is: let's say I have a first lookup named hosts.csv with hosts a,b,c,d,e,f, and a second lookup decom.csv with hosts a,b,c. I want to compare the two lookups and remove the second lookup's values from the first, so I should get just d,e,f. Please help me solve this. TIA.
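A sketch, assuming both files have a column named `host`: the subsearch returns the decommissioned hosts, and the outer `search NOT [...]` filters them out (subsearches have result limits, but they're far above the size of lookups like these):

```spl
| inputlookup hosts.csv
| search NOT [| inputlookup decom.csv | fields host]
```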
Hi, can you help me create a search for Fortigate VPN users? I need statistics on which user, VPN duration, and total GB. The default Splunk app doesn't have this detail; I need to know the total per VPN user per day.

This is the app's search (it gives totals per tunnel, not per user):

```spl
| tstats summariesonly=true max(_time) AS NTime,
    last(log.system_event.vpn.tunnelname) AS Tunnel_Name,
    last(log.sentbyte) AS Sent,
    last(log.rcvdbyte) AS Received,
    last(log.system_event.vpn.tunneltype) AS Tunnel_Type,
    last(log.user) AS User,
    last(log.system_event.vpn.group) AS User_Group,
    last(log.system_event.vpn.duration) AS Duration_Sec
  FROM datamodel="ftnt_fos"
  WHERE nodename="log.system_event.vpn" log.sentbyte!=0 log.rcvdbyte!=0
    log.devname="*" log.vd="*" log.system_event.vpn.tunneltype="*" log.user="*"
  groupby _time log.system_event.vpn.tunnelname
| eval Received_MB = Received/(1024*1024)
| eval Sent_MB = Sent/(1024*1024)
| sort -_time
| convert ctime(NTime) as Time
| table Time, Tunnel_Name, Tunnel_Type, User, User_Group, Sent_MB, Received_MB, Duration_Sec
```
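A possible per-user daily rollup, reusing the data-model fields from the original search (untested; all field and datamodel names are assumed from that search):

```spl
| tstats summariesonly=true
    sum(log.sentbyte) AS Sent,
    sum(log.rcvdbyte) AS Received,
    sum(log.system_event.vpn.duration) AS Duration_Sec,
    count AS Sessions
  FROM datamodel="ftnt_fos"
  WHERE nodename="log.system_event.vpn" log.user="*"
  groupby _time span=1d log.user
| eval Total_GB = round((Sent + Received)/1024/1024/1024, 2)
| rename log.user AS User
| table _time User Sessions Duration_Sec Total_GB
```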
I am using a macro for these URLs. The data contains URLs like /accountinformationview, /AccountInformationView, /emailsubscription, /EmailSubscription, /logoff, /Logoff. In the macro I am using only URL=/AccountInformationView OR URL=/EmailSubscription, but I am getting both case variants of each URL. I tried URL="/EmailSubscription" and still get the same results. How can I get exactly the URL mentioned in the macro?
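Field matching in the `search` command is case-insensitive, which would explain both variants coming back. A case-sensitive comparison can be done after the initial search with `where`, whose `==` comparison respects case (the base search here is a placeholder for the macro's own search terms):

```spl
index=web sourcetype=access_combined
| where URL=="/AccountInformationView" OR URL=="/EmailSubscription"
```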
Hello friends. I have the log file below, and I need to extract exactly a specified value from one line:

```
attr_itx_is_online [int] = 0
attr_itx_is_locked [int] = 0
attr_itx_workbin_type_id [str] = "TESTE"
attr_itx_agent_id [str] = "TESTE_OLSONJU_6628"
attr_itx_received_at [str] = "2020-09-22T17:45:01Z"
attr_itx_submitted_at [str] = "2020-09-22T17:45:51Z"
attr_itx_delivered_at [str] = "2020-09-22T18:47:06Z"
attr_itx_placed_in_queue_at [str] = "2020-09-22T17:46:06Z"
```

I need to capture the values from the line beginning with attr_itx_submitted_at [str] = where the date starts with "2020-05". Is that possible?
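A sketch using `rex` to pull the quoted value and `like()` to keep only May 2020 (the field name `submitted_at` is chosen for illustration):

```spl
| rex field=_raw "attr_itx_submitted_at \[str\] = \"(?<submitted_at>[^\"]+)\""
| where like(submitted_at, "2020-05%")
```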
Hi, I'm trying to get data in from a file where the data is in the following format (anonymized):

```
{"seq":55619,"ntp_time":[3809782725,1802580594],"reporting_id":{"tugid":"server","ep_type":"sip","side":"SS","mac":"aa:bb:cc:dd:ee:ff","user":"username","dn":"43128"},"stream_id":{"sip_callid":"hexstring","local_uri":"sips:emailstring:5061","remote_uri":"sips:emailstring:5061;transport=tls","ep_stream_id":5053},"event":"rtcp_tx","rtcp_block":{"addr_local":"ipaddr:24794","addr_remote":"ipaddr5036","cname":"emailstring","snd_ssrc":680275594,"recv_ssrc":3888553685,"snd_pktcnt":206158433963,"snd_bcnt":4121132523374324448,"rx_loss_total":139753940844544,"rx_loss_fract":0,"rx_jtr":-139758235811834,"rtt":139753940844544},"rtp_stats":{"observed_pt":0,"observed_codec":"RTP_CODEC_G711_U"}}
```

So, a nice JSON. But that pair of integers in ntp_time[] are seconds since 1/1/1900 plus a fractional second, not seconds since 1/1/1970. I'm really, really hoping I don't have to write a second script that rewrites the correct timestamp.

On my indexers, for the sourcetype I've defined for this, I have the following:

```ini
[baddate]
REGEX = ntp_time\":\[(?<baddate>\d+)
INGEST_EVAL = gooddate = baddate - 2208988800
```

I also have props.conf calling the transform, and fields.conf setting INDEXED = true for baddate. But I don't get the field in search yet. Would this even work, though? Does anyone have any other strategies I can try? I don't really care about the fractional second, but would work it in if I can get something working.
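One thing that may explain the missing field: as far as I know, a REGEX capture and INGEST_EVAL don't combine in a single transform stanza; the eval cannot see fields captured by REGEX in the same stanza. A sketch that does the extraction inside the eval itself via `replace()` (untested; `ntp_to_epoch` and `mysourcetype` are placeholder names, and 2208988800 is the 1900-to-1970 offset from the post):

```ini
# transforms.conf -- extract the first ntp_time integer and shift epochs
[ntp_to_epoch]
INGEST_EVAL = gooddate=tonumber(replace(_raw, "^.*\"ntp_time\":\[(\d+).*$", "\1")) - 2208988800

# props.conf
[mysourcetype]
TRANSFORMS-ntp = ntp_to_epoch

# fields.conf -- make the new field searchable as an indexed field
[gooddate]
INDEXED = true
```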
Hello friends, I am trying to create an alert that sends me a list of sources when the number of failure events is more than 100. Basically, the alert should trigger when the total count exceeds 100, but the email should contain a summary of counts by source, so it's easy to find which source has the problem. Thanks
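A possible shape for the alert search (the index and "failure" filter are placeholders): `eventstats` attaches the overall total to every row, `where` suppresses all results unless the total passes 100, and the surviving per-source table becomes the email body. The alert trigger condition would then be "number of results > 0":

```spl
index=app_logs "failure"
| stats count BY source
| eventstats sum(count) AS total
| where total > 100
| fields source count
```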
Hi all, I'm trying to figure out how to get my hands on a list of IDs that are determined by correlating three events. I can't use join, transaction, or subsearches because of the event limits involved. Specifically, I have:

- Event A, which contains a req_id
- Event B, which contains the same req_id and a correlationId
- Event C, which does not contain a req_id but contains the correlationId from Event B, plus a personId that I need

The reason I need Event A, rather than just looking at Events B and C, is that there are numerous unrelated occurrences of Event B, so I first have to ensure they can be associated with an Event A.

TL;DR: I need the list of personId values that come from Event C, but only those associated with Event A and Event B; the challenge is that there is no one value contained by all three.
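One join-free pattern: bring all three event types into a single search and chain two `eventstats` passes, first over req_id (linking A to B), then over correlationId (carrying that link to C). Events missing the BY field simply pass through each `eventstats` untouched. Everything below except the three field names from the post (req_id, correlationId, personId) is an assumption, including the filters and the evt tagging:

```spl
index=app ("EVENT_A" OR "EVENT_B" OR "EVENT_C")
| eval evt=case(searchmatch("EVENT_A"), "A",
                searchmatch("EVENT_B"), "B",
                searchmatch("EVENT_C"), "C")
| eventstats values(evt) AS req_evts BY req_id
| eval ab_linked=if(evt="B" AND isnotnull(mvfind(req_evts, "A")), 1, 0)
| eventstats max(ab_linked) AS has_ab BY correlationId
| where evt="C" AND has_ab=1
| stats values(personId) AS personId
```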