Hi @riposan, you should calculate the duration before formatting lastlogin. Please try the search below:

| stats max(_time) as lastlogin by user
| eval n=time()
| eval durationday = n-lastlogin
| eval today=strftime(n,"%m-%d-%Y %H:%M:%S.%Q")
| eval durationday= tostring(durationday,"duration")
| table user,lastlogin,today,durationday
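To illustrate why the subtraction has to happen first, here is a minimal Python sketch (not SPL) of the same order of operations: subtract the raw epoch values, then format the result. The epoch numbers are hypothetical, and the duration format mimics tostring(x, "duration").

```python
# Compute the duration from raw epoch seconds FIRST, then format for display.
def duration_string(seconds):
    """Render a duration in seconds as D+HH:MM:SS, similar to tostring(x, "duration")."""
    seconds = int(seconds)
    days, rem = divmod(seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{days}+{hours:02d}:{minutes:02d}:{secs:02d}"

lastlogin = 1700000000           # hypothetical epoch value from stats max(_time)
now = lastlogin + 93784          # 1 day, 2 hours, 3 minutes, 4 seconds later
print(duration_string(now - lastlogin))  # -> 1+02:03:04
```

If you formatted lastlogin into a string first, the subtraction would no longer be valid arithmetic, which is exactly the problem the SPL above avoids.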
Hi @danielteachesit, you can use the -f option to restart faster. It forces a restart without gracefully closing files, connections, etc., so only use it in a dev environment where you do not care about in-flight data.

./splunk restart -f
Hi @GaetanVP, you do not need anything specific to distinguish them; they are already different hosts. You can simply filter on host, like below:

index=apache host=dev

Or you can check the host field to tell the environments apart.
Hi @manhalmoussa, your MAX_TIMESTAMP_LOOKAHEAD setting seems wrong; please try this:

MAX_TIMESTAMP_LOOKAHEAD=25
TIME_FORMAT=%Y-%m-%dT%H:%M:%S%:z
Hi @mailwimp, if you only need the from and to fields, you can try the search below:

index="sendmail_logs" host=relay*
| stats values(from) as from values(to) as to by qid
Hi @Anthony3rd, you can lowercase all usernames first:

index="its_sslvpn" host=*SIRA* user=*@* date_mday=15
| eval user=lower(user)
| stats dc(user) as user_count by date_month
Hi @Anthony3rd, you can try the sample below; it shows the unique user count for the 15th day of each month.

index="its_sslvpn" host=*SIRA* user=*@* date_mday=15
| stats dc(user) as user_count by date_month
@finchy, you can use the sample below; it searches for the value "text_to_search" in all lookups. It is better to limit the number of lookup files by filtering on title.

| rest /servicesNS/-/-/data/lookup-table-files f=title
| fields title
| dedup title
| map maxsearches=1000 search="| inputlookup $title$ | fieldsummary | eval lookup_name=\"$title$\" | fields values field lookup_name"
| spath input=values
| rename {}.* as *
| fields lookup_name field value
| search value="text_to_search"
Hi @JohnWilly, I haven't used backup/restore for ITSI myself, but it should work if the Splunk and ITSI versions are the same on the destination.
Hi @qcjacobo2577, can you try it as blacklist3? If you are deploying Splunk_TA_windows to the UF, it already has blacklist1 defined.

blacklist3 = EventCode="4103" Message="(?:Host Application =)\s+(?:.*WINDOWS\\CCM\\SystemTemp\\+.*)"
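As a quick sanity check of the regex itself, here is a small Python sketch that runs the same pattern against a hypothetical EventCode 4103 message (the sample message texts are assumptions, not taken from your logs):

```python
import re

# Same pattern as in the blacklist3 line above.
pattern = r"(?:Host Application =)\s+(?:.*WINDOWS\\CCM\\SystemTemp\\+.*)"

# Hypothetical message that should be blacklisted.
sample = r"Host Application = C:\WINDOWS\CCM\SystemTemp\install.ps1"
print(bool(re.search(pattern, sample)))  # -> True

# Hypothetical message that should be kept.
other = r"Host Application = C:\Program Files\PowerShell\pwsh.exe"
print(bool(re.search(pattern, other)))   # -> False
```

Testing the pattern outside Splunk first can save a few inputs.conf reload cycles.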
Hi @JohnWilly,
1- Since ITSI uses a lot of data models, adding a new SHC to the indexer cluster will cause duplicate acceleration tsidx files. This puts extra load on the system in terms of CPU, I/O, and storage space, which is why I would not do it.
2- You can use the full backup/restore procedure: create a full backup of ITSI.
3- Since Splunk Enterprise 7.3.5 supports ITSI 4.7, you should be able to upgrade. But because of the backup/restore you may hit the same upgrade problem. My advice is to upgrade Splunk Enterprise and ITSI to a supported version.
Hi @Khuzair81, I assume you have a lookup named lookup.csv that contains a src field, and you want to find events whose src value does not exist in the lookup. You can try the below as a sample:

index=_internal NOT [|inputlookup lookup.csv | fields src | format]
Hi @ejose, I cannot think of a reason why the "name" field name is not working, but if you managed to extract the field as "name1", you can simply rename it on the search head in the sourcetype settings. That way you will not need to rename it in every search.
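If you prefer editing the configuration directly, a field alias on the search head does the same rename at search time. A minimal props.conf sketch, where the stanza name and alias class name are assumptions to adapt to your sourcetype:

```
# props.conf on the search head (hypothetical sourcetype name)
[your_sourcetype]
FIELDALIAS-fix_name = name1 AS name
```

After reloading the configuration, searches against that sourcetype can use name directly while name1 stays available.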
Hi @GaetanVP, yes, they work on the same machine. The syslog server writes logs to the filesystem, and on the same machine the HF or UF uses a monitor input to ingest the data.
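For the monitor side, a minimal inputs.conf sketch on the HF/UF might look like this; the file path, index, and sourcetype here are assumptions to adjust to your setup:

```
# inputs.conf (hypothetical path written by the syslog server)
[monitor:///var/log/syslog-ng/f5/*.log]
index = network
sourcetype = f5:syslog
disabled = false
```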
Hi @ejose, if your event is proper JSON you can use the spath command; it will extract all fields:

index=json_data | spath

If you have mixed data with some JSON inside an event, you can run spath only on that field. Assuming your JSON field name is json_field:

index=json_data | spath input=json_field
Hi @GaetanVP, let's assume you configured the HF to listen on TCP port 5140 and configured the F5 to send to TCP 5140. When the F5 connects to the HF on TCP 5140, the connection has both a source and a destination port, so the F5 randomly assigns a source port for the connection (59697), which is what you see in the log files. It does not mean the F5 is trying to connect to port 59697; that value is only useful for debugging, e.g. in firewall logs.

Having Splunk listen on TCP ports directly is not recommended. Most probably your HF is having performance problems or cannot process data as fast as needed; when that happens the F5 detects the blocking and tries to restart the connection. The best practice is to use a syslog server (rsyslog or syslog-ng) to listen on the TCP/UDP ports and write to files, and have the HF monitor those files and ingest the data. See the link below for further information:

https://conf.splunk.com/files/2017/slides/the-critical-syslog-tricks-that-no-one-seems-to-know-about.pdf

Or you can use Splunk Connect for Syslog:

https://splunk.github.io/splunk-connect-for-syslog/main/faq/
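As a rough illustration of the syslog-server approach, here is a minimal rsyslog sketch that listens on TCP 5140 and writes everything it receives to a file for the HF to monitor; the port, ruleset name, and file path are assumptions:

```
# /etc/rsyslog.d/f5.conf (hypothetical)
module(load="imtcp")
input(type="imtcp" port="5140" ruleset="f5")

ruleset(name="f5") {
    action(type="omfile" file="/var/log/f5/f5.log")
}
```

In production you would normally add log rotation and split files per host, but this shows the basic listen-and-write pattern.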
Hi @GaetanVP, TcpInputProc logs show the connection source in <src_ip_address>:<src_port> format, so 59697 is the source port of your F5 device. Since the F5 keeps reconnecting to the HF to send data, this source port changes every time.
Hi @AL3Z, could you please share some samples? You can make the regex case-insensitive, but I need to see sample events to cover your second case.
Hi @Stephcg, can you please try the search below?

index=gateway source=http:source-test sourcetype=sourcetype_teste-gateway toState=OPEN
| timechart dc(host) as count by circuitBreakerName
Hi @AruBhende, if you can see all the correlation IDs, they should differ between stages. Can you paste a sample log containing both stages?
Hi @bud4, please try the sample below; there is no need to use the join command:

(index=index1 host=tnsm123*) OR (index=index2 sourcetype=db)
| stats latest(Status) as Status latest(StartTime) as StartTime latest(EndTime) as EndTime latest(AvgRunTime) as AvgRunTime latest(BypassFlag) as BypassFlag by JobName