All Posts



Hi @rolypolytoyy, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated
Hi, has anyone had issues monitoring Splunk servers with Zabbix agents? Just monitoring, no integration or log ingestion. Thanks, M
What defines the start and end of the error text in each of those examples, and how much of that do you want to get into error_message? You could very simply do this: | rex "\]:\s(?<error_message>.*)" which would take everything after the ]: to the end of the event.
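Outside Splunk, the same capture can be sanity-checked in Python; note that SPL's rex uses PCRE named groups `(?<name>...)`, while Python's re module spells them `(?P<name>...)`. A quick sketch against one of the sample events from the question:

```python
import re

event = ("Nov 28 01:02:29 server2 proxy_child[1939385]: "
         "pam_unix(system-auth-ac:auth): authentication failure; "
         "logname= uid=0 euid=0 tty=ssh ruser= rhost=10.177.46.57 user=hippm")

# Same pattern as | rex "\]:\s(?<error_message>.*)" but in Python's
# (?P<...>) named-group syntax: capture everything after the first "]: ".
m = re.search(r"\]:\s(?P<error_message>.*)", event)
print(m.group("error_message"))
```

Bear in mind the pattern anchors on the *first* "]: " in the event, so on lines with nested brackets (like the sssd example) it captures from the first closing bracket onward.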
Hi, I'm trying to set up a way to automatically assign notables to the analysts, and evenly. The "default owner" in the notable adaptive response wouldn't help, as it will keep assigning the same rule to the same person. Is there a way to shuffle the assignees automatically?
The starting point for correlating two datasets is to combine them into a single search, then use stats to join them through a common field. So, you should start with this approach:
...search...
| table _time os host user clientName clientAddress signature logonType
| convert mktime(_time) as epoch
| sort -_time
| inputlookup append=t Request_admin_access.csv
... now use eval+stats to join and collapse the data events and the lookup events together, e.g.
| stats values(*) as * by host
You seem to have host in both the data and the lookup. Map is certainly not the right tool for this job.
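Conceptually, the combine-then-stats pattern is just a group-by-key merge of two record sets. A minimal Python sketch of the idea (field names other than host are illustrative assumptions, not taken from the actual data):

```python
from collections import defaultdict

# Two record sets sharing a common key field ("host"), standing in for
# the search results plus the rows appended with inputlookup append=t.
search_events = [
    {"host": "web01", "user": "alice", "signature": "logon"},
    {"host": "web02", "user": "bob", "signature": "logoff"},
]
lookup_rows = [
    {"host": "web01", "requested_access": "admin"},  # hypothetical lookup field
]

# Rough equivalent of: | stats values(*) as * by host
# (simplified: real stats values(*) keeps multivalue sets per field)
merged = defaultdict(dict)
for row in search_events + lookup_rows:
    merged[row["host"]].update(row)

print(merged["web01"])
```

Rows that share a host collapse into one combined record; hosts present in only one dataset simply keep their own fields, which is why this pattern replaces an explicit join.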
Try this | rex mode=sed "s/(<Message>).*( is pending)(<\/Message>)/\1request\2\3/"  
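The same sed-style substitution can be checked outside Splunk; Python's re.sub accepts the same three capture groups and backreferences:

```python
import re

msg = "<Message>Thuihhh_4y3y27y234yy4 is pending</Message>"

# Mirrors: rex mode=sed "s/(<Message>).*( is pending)(<\/Message>)/\1request\2\3/"
# Groups 1-3 keep the tags and the " is pending" suffix; the variable
# middle part is replaced by the literal "request".
result = re.sub(r"(<Message>).*( is pending)(</Message>)",
                r"\1request\2\3", msg)
print(result)  # <Message>request is pending</Message>
```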
How do I deal with the empty fields between delimiters? For example, there is an empty field between passwd and after the home directory: userid:passwd: :/home/John: :
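For reference, splitting that line on ":" shows exactly where the whitespace-only and empty fields land; a quick Python check (stripping is one possible way to normalise them, an assumption about the desired result):

```python
line = "userid:passwd: :/home/John: :"

# Naive split keeps the whitespace-only fields between the colons.
fields = line.split(":")
print(fields)   # ['userid', 'passwd', ' ', '/home/John', ' ', '']

# One option: strip each field so blanks become empty strings.
cleaned = [f.strip() for f in fields]
print(cleaned)  # ['userid', 'passwd', '', '/home/John', '', '']
```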
Can you share an example of the data where you have each type of status? If status values are being extracted for some events but not others, it would indicate your data is not in a standard format. Where is your data coming from? Perhaps you can share an anonymised version of it.
You need to use your result tokens, not just some name out of the blue. https://docs.splunk.com/Documentation/Splunk/latest/Alert/EmailNotificationTokens
@nickhills Just came across your comment, which made me chuckle, that the appinspect process encourages us to use definitions. While I agree with the principle of using definitions, I would say that a hard failure is not exactly an encouragement - it's a point-blank "computer says NO".
I'm getting this error from a connection in DB Connect: "There was an error processing your request. It has been logged (ID xxxx)". I've manually copied the query into db_inputs.conf but it doesn't work. Can anyone help me?
Hi, I am trying to set up an alert with the following query for tickets that are not assigned to someone after 10 minutes. I wanted the ticket number to be populated in the mail, but it isn't; the mail arrives without the ticket number.
index="servicenow" sourcetype=":incident"
| where assigned_to = ""
| eval age = now() - _time
| where age > 600
| table ticket_number, age, assignment_group, team
| lookup team_details.csv team as team OUTPUTNEW alert_email, enable_alert
| where enable_alert = "Y"
| sendemail to="$alert_email$" subject="Incident no. $ticket_number$ is not assigned for more than 10 mins - Please take immediate action" message="Hi Team, This is to notify you that the ticket: $ticket_number$ is not assigned for more than 10 mins. Please take necessary action on priority"
Hi, I noticed that the ingestion latency health check is turning yellow for the indexers. There is even a delay in searching for the data. The indexing queues show as blocked when checking the internal logs of the heavy forwarders. Could performing a rolling restart of the indexers help? It has been a very long time since we last did one.
I am also getting the same error. Did it get fixed for you? If yes, please let me know what the fix was.
Hi, did you resolve this?
I want to change the msg for a log, i.e.
<list >
<Header>.....</Header>
<status>
<Message>Thuihhh_4y3y27y234yy4 is pending</Message>
</status>
</list>
to
<list >
<Header>.....</Header>
<status>
<Message>request is pending</Message>
</status>
</list>
How can I achieve this using rex + sed commands in Splunk?
I want to extract the following information and make it a field called error_message.
index=os source="/var/log/syslog" "*authentication failure*" OR "Generic preauthentication failure"
Example events:
Nov 28 01:02:31 server1 sssd[ldap_child[12010]]: Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Generic preauthentication failure. Unable to create GSSAPI-encrypted LDAP connection.
Nov 28 01:02:29 server2 proxy_child[1939385]: pam_unix(system-auth-ac:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.177.46.57 user=hippm
Hello there, I would like to convert the default time to the local country timezone and place the converted time next to the default one. The default timezone is Central European Time, and based on the country name available in the report, I need to convert the timezone. I guess I need a lookup table with the country name and the timezone of that country.
Timestamp CountryCode CountryName Region
2023-10-29T13:15:51.711Z BR Brazil Americas
2023-10-30T10:13:19.160Z BH Bahrain APEC
2023-10-30T19:15:24.263Z AE Arab Emirates APEC
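The country-to-timezone lookup idea can be prototyped outside Splunk; a minimal Python sketch using zoneinfo, assuming the Z suffix means the stored timestamps are UTC (the country-to-IANA-zone table below is an illustrative assumption, standing in for the CSV lookup):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical CountryName -> IANA timezone lookup; in Splunk this
# would be a CSV lookup keyed on CountryName.
country_tz = {
    "Brazil": "America/Sao_Paulo",
    "Bahrain": "Asia/Bahrain",
    "Arab Emirates": "Asia/Dubai",
}

def to_local(ts_utc: str, country: str) -> str:
    # Parse the Zulu-suffixed timestamp as UTC, then render it in the
    # country's local zone so both times can be shown side by side.
    dt = datetime.strptime(ts_utc, "%Y-%m-%dT%H:%M:%S.%fZ")
    dt = dt.replace(tzinfo=ZoneInfo("UTC"))
    return dt.astimezone(ZoneInfo(country_tz[country])).isoformat()

print(to_local("2023-10-29T13:15:51.711Z", "Brazil"))
```

In SPL the same shape would be a lookup to fetch the zone name followed by an eval with strftime/strptime, but the mapping from country to zone has to come from your own table either way.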
Hi All, we have configured application log monitoring on Windows application servers. The log path has a folder where all the _json files are stored. There are more than 300 json files in each folder, with different timestamps and dates. We have configured inputs.conf as shown below with ignoreOlderThan = 2d so that Splunk should not consume too much CPU/memory, but we can still see the memory and CPU usage of the application server going high. Kindly suggest best-practice methods so that the Splunk universal forwarder won't consume so much CPU and memory.
[monitor://C:\Logs\xyz\zbc\*]
disabled = false
index = preprod_logs
interval = 300
ignoreOlderThan = 2d
Now, in props.conf I have:
[stream:ip]
TRUNCATE = 100000
I will change it to 0, check, and return with the answer.