Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi guys, thank you in advance. Is it possible to use a value from the search results as a parameter in | sendemail from="?" ? With | sendemail to= we can use $result.mail_to$, but | sendemail from=$result.mail_from$ doesn't work. We have already disabled the security options for this. For example:

index="main"
| eval mail_from = "username@mail.com"
| eval mail_to = "username@mail.com"
| eval subject = "subject"
| table username age country city
| sendemail to=$result.mail_to$ from= $result_mail_from$ subject=$results.subject$ message="This is an example message" sendresults=true inline=true format=table sendcsv=true
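A hedged observation and a minimal corrected sketch: sendemail substitutes $result.<fieldname>$ tokens from the first result row, the sample above spells the token three different ways ($result.mail_to$, $result_mail_from$, $results.subject$), and the | table command drops mail_from, mail_to, and subject before sendemail ever sees them, which would leave the tokens empty. Normalizing the spelling and keeping the fields in the results may be all that is needed:

index="main"
| eval mail_from = "username@mail.com", mail_to = "username@mail.com", subject = "subject"
| table username age country city mail_from mail_to subject
| sendemail to=$result.mail_to$ from=$result.mail_from$ subject=$result.subject$ message="This is an example message" sendresults=true inline=true format=table sendcsv=true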
Good morning. I have a field that I've called problem_detail in our Helpdesk index. It contains all the types of problems that are logged with us, and I would like to merge together only those that are associated with email queries; there are about 15 different ones.

index=mmuh_helpdesk sourcetype=mmuh_helpdesk_json
| dedup id
| fillnull value=NULL
| search "problemtype.detailDisplayName"!=*AGRESSO*
| eval problem_detail='problemtype.detailDisplayName'
| eval problem_detail=replace(problem_detail, "&#8226","")
| eval problem_detail=replace(problem_detail, ";","|")
| eval techGroupLevel = 'techGroupLevel.levelName'
| eval techGroupLevel = replace(techGroupLevel, " "," ")
| eval techGroupLevel = replace(techGroupLevel, " ","")
| eval techGroupLevel = replace(techGroupLevel, "Level"," Level")
| eval location_Name = 'location.locationName'
| eval status = 'statustype.statusTypeName'
| eval priority = 'prioritytype.priorityTypeName'
| eval techGroupId = 'techGroupLevel.id'
| eval tech_Name = 'clientTech.displayName'
| stats count by problem_detail

This SPL gives me the full list of 158 problem details, and from there I can see that around 15 of them relate to email. Is there a way to combine the totals from all the problem_details that contain 'email'? I tried eval and then coalesce, but it didn't work. Thank you.
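A minimal sketch, assuming any problem_detail containing "email" (case-insensitive) should be collapsed into one bucket before counting; the bucket name "Email queries" is made up:

... your existing search through the eval commands ...
| eval problem_group=if(match(problem_detail, "(?i)email"), "Email queries", problem_detail)
| stats count by problem_group

match() with the (?i) flag catches mixed-case variants; coalesce() doesn't help here because it picks the first non-null value rather than classifying values.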
I'm trying to test Splunk Cloud. I registered for the free trial but haven't received any email from Splunk so far. I've faced a similar problem a few times. What do I do in this situation?
The Splunk search query retrieves logs from the specified index, host, and sourcetype, filtering them on fields such as APPNAME, event, httpMethod, and loggerName. It then deduplicates the events on the INTERFACE_NAME field and counts the remaining unique events.

The Splunk alert monitors the iSell application's request-activity logs, specifically looking for cases where no data is processed within the last 30 minutes. If fewer than 2 unique events are found, the alert triggers once, notifying the appropriate parties.

On our end, records are processed successfully, and the alert condition raises an INC when the count is less than 2. Even though we are getting more than one successful event, the alert still triggers and an INC is created. Please check why we are getting false alerts and advise us.

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor"
| dedup INTERFACE_NAME
| stats count
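A minimal sketch of one thing worth ruling out, assuming the alert runs over a rolling 30-minute window: indexing lag can make the most recent events invisible at the moment the alert fires, so the count momentarily drops below 2 even though the records arrive shortly afterwards. Snapping the window to whole minutes and counting distinct interface names directly makes the result easier to reason about:

index=*core host=* sourcetype=app_log APPNAME=iSell event=requestActivity httpMethod=POST loggerName="c.a.i.p.a.a.a.StreamingActor" earliest=-30m@m latest=@m
| stats dc(INTERFACE_NAME) as unique_interfaces

Comparing _time with _indextime on the matching events would confirm whether late arrival is the cause of the false triggers.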
Is the Oracle Diagnostic Logging (ODL) format supported in any way by Splunk? On the forum I have found only one topic about it, but it was written 8 years ago. This format, which I read and analyze every day, is used by SOA and OSB diagnostic logs. It is, more or less, a CSV-like structure, but instead of a tab/space/comma delimiter, each value is packed into brackets. Below is an example with a short description:

[2010-09-23T10:54:00.206-07:00] [soa_server1] [NOTIFICATION] [] [oracle.mds] [tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0] [APP: wsm-pm] "Metadata Services: Metadata archive (MAR) not found."

Timestamp, originating: 2010-09-23T10:54:00.206-07:00
Organization ID: soa_server1
Message Type: NOTIFICATION
Component ID: oracle.mds
Thread ID: tid: [STANDBY].ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'
User ID: userId: <anonymous>
Execution Context ID: ecid: 0000I3K7DCnAhKB5JZ4Eyf19wAgN000001,0
Supplemental Attribute: APP: wsm-pm
Message Text: "Metadata Services: Metadata archive (MAR) not found."

Any solutions or hints on how to manage it in Splunk? Regards, KP.
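A custom sourcetype is one way to approach this. A minimal props.conf sketch, where the stanza name odl and the extracted field names are assumptions, not anything shipped by Splunk:

[odl]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\[\d{4}-\d{2}-\d{2}T)
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
EXTRACT-odl = ^\[(?<odl_time>[^\]]+)\]\s\[(?<organization_id>[^\]]*)\]\s\[(?<message_type>[^\]]*)\]\s\[(?<message_id>[^\]]*)\]\s\[(?<component_id>[^\]]*)\]

The lookahead in LINE_BREAKER keeps multi-line messages glued to their bracketed header. Fields past the fifth are trickier because tid values contain nested brackets, so extracting those positions needs a more careful regex.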
Hello All, I am using the Trellis format to display the unique/distinct count of log sources in our environment. Below is my query for the dashboard panel. Notice how it shows "distinct_..." at the top of the box. How do I remove this? I just want it to show the title OKTA on top of the boxes, not the field name.

Below is my query for the OKTA log source:

| tstats dc(host) as distinct_count where index=okta sourcetype="OktaIM2:log"

Thanks in advance
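A minimal sketch, assuming Trellis labels each tile with the aggregate field's name: renaming the aggregate to the label you want may be all that's needed.

| tstats dc(host) as OKTA where index=okta sourcetype="OktaIM2:log"

A label with spaces can be quoted, e.g. dc(host) as "OKTA Hosts".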
In Splunk, a user is getting the following error: Could not load lookup=LOOKUP-pp_vms, but admin is not getting any such errors. That lookup file is not present either. What do we need to do?
We have a Splunk dashboard for our team on a Splunk cluster. Almost every report panel shows an exclamation symbol and contains the message below. The issue has been present for the past month. Could you please help me fix it?

Error Details:
---------------------
*-199.corp.apple.com] Configuration initialization for /ngs/app/splunkp/mounted_bundles/peer_8089/*_SHC took longer than expected (1145ms) when dispatching a search with search ID remote_sh-*-13.corp.apple.com_2320431658__232041658__search__RMD578320bc0a7e9dada_1709881516.707_378AAA09-A2C2-4B63-B88A-50A6B29A67DF. This usually indicates problems with underlying storage performance."
I have all the relevant data I need from a single source, but I want to present it in a way I can't get working. I want to show the departments, users, and the count that are using specific URLs, and put them on a single line with the corresponding URL:

Team1     User1     URL1     Count
Team2     User4
Team3     User9
------------------------------------------------------------------------
Team1     User3     URL2     Count
Team4     User4
          User12
          User16
          User17
------------------------------------------------------------------------
Team3     User1     URL3     Count
Team6     User3
Team10    User12
------------------------------------------------------------------------

Let me know if I need to clarify anything.
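A minimal sketch, assuming the source has fields named team, user, and url (all three names are assumptions): stats values() collapses the teams and users for each URL into multivalue cells, which render as stacked lines within a single table row, close to the mock-up above.

index=your_index sourcetype=your_sourcetype
| stats values(team) as Team values(user) as User count as Count by url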
There seems to be a lot of information about other Cisco VPN technologies (ASA/Firepower/AnyConnect), but I am not finding much relating to FlexVPN (site-to-site) tunnels. Maybe I am not looking up the correct terminology. FlexVPN runs on IOS XE. I have logging configured the same way, using logging trap informational (the default), and noticed that we do not seem to be getting much data about the specifics of the tunnels, negotiations, etc., from a raw syslog perspective. What we would like to do is monitor the tunnels so we know whenever a tunnel is brought up or taken down, or the source (connection) IPs change. There are possibly other things we haven't thought of yet; I'm hoping to find someone else who has used the same technologies and has something already built out. Thank you in advance.
I have a universal forwarder running on my Domain Controller which only captures logon/logoff events.

inputs.conf
```
[WinEventLog://Security]
disabled = 0
current_only = 1
renderXml = 1
whitelist = 4624, 4634
```

On my Splunk server I set up forwarding to a 3rd party.

outputs.conf
```
[tcpout]
defaultGroup = nothing

[tcpout:foobar]
server = 10.2.84.209:9997
sendCookedData = false

[tcpout-server://10.2.84.209:9997]
```

props.conf
```
[XmlWinEventLog:Security]
TRANSFORMS-Xml = foo
```

transforms.conf
```
[foo]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = foobar
```

Before creating/editing these conf files I was still seeing lots of non-Windows events being sent to the destination. With these confs in place I am not seeing any events being forwarded. What's the easiest fix to my conf files so that I only send the XML events to the 3rd party system?

Thanks, Billy

EDIT: What markup does this forum use? Single/triple backticks don't work, nor does <pre></pre>.
Hello, I would like to know if there is any way to integrate GitHub Cloud with Splunk Cloud, and how these logs can then be forwarded from Splunk to the Rapid7 SIEM.
I have .gz syslog files but I am unable to fetch all of them. For each host (abc) there are .gz files named syslog.log.1.gz through syslog.log.24.gz. I only see the one ending in 24 ingested; for all the others, the internal logs say "was already indexed as a non-archive, skipping".

Log path: /ad/logs/abc/syslog/syslog.log.24.gz

Internal logs:
03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Archive with path="/ad/logs/abc/syslog/syslog.log.2.gz" was already indexed as a non-archive, skipping.
03-12-2024 14:59:59.590 -0700 INFO ArchiveProcessor [1944346 archivereader] - Finished processing file '/ad/logs/abc/syslog/syslog.log.2.gz', removing from stats

Should I try crcSalt or initCrcLength?
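A minimal sketch, assuming the files come in via a monitor input (the stanza path is an assumption based on the log path above): crcSalt = <SOURCE> mixes the full path into the file's CRC, so rotated files whose content starts identically are no longer treated as already seen.

[monitor:///ad/logs/*/syslog/syslog.log.*.gz]
crcSalt = <SOURCE>

Be aware this changes the CRC of every file under the stanza, so files Splunk has already read may be re-indexed once. initCrcLength is the gentler alternative when the files merely share a long common header.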
I have a query where my results look like this:

Application1-Start  Application1-Stop  Application2-Start  Application2-Stop  Application3-Start  Application3-Stop
10                  4                  12                 7                  70                  30
12                  8                  10                 4                  3                   2
14                  4                  12                 5                  16                  12

But I want to see the output as shown below. Is that possible?

Start         Start         Start         Stop          Stop          Stop
Application1  Application2  Application3  Application1  Application2  Application3
10            12            70            4             7             30
12            10            3             8             4             2
14            12            16            4             5             12
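A minimal sketch of the column regrouping, assuming the field names really end in -Start and -Stop: table accepts wildcards, so listing the Start pattern first moves all Start columns before the Stop columns, and a wildcarded rename can put Start/Stop at the front of each header. A plain statistics table cannot render a true two-row header, so this single-row labeling is as close as it gets:

... your existing search ...
| table *-Start *-Stop
| rename *-Start as Start-*, *-Stop as Stop-*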
Hello, how can I ensure the data being sent to cool_index is rolled to cold when the data is 120 days old? The config I'll use:

[cool_index]
homePath = volume:hotwarm/cool_index/db
coldPath = volume:cold/cool_index/colddb
thawedPath = $SPLUNK_DB/cool_index/thaweddb
frozenTimePeriodInSecs = 10368000 # 120-day retention
maxTotalDataSizeMB = 60000
maxDataSize = auto
repFactor = auto

Am I missing something?
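One point worth flagging: frozenTimePeriodInSecs controls when buckets are frozen (deleted or archived), not when warm buckets roll to cold; the warm-to-cold transition is driven by bucket count and hot/warm volume size, not by age. A minimal sketch of the size-based knobs, where both values are illustrative assumptions rather than recommendations:

[cool_index]
# roll warm -> cold once there are more than this many warm buckets
maxWarmDBCount = 300
# or once hot+warm usage for this index exceeds this size
homePath.maxDataSizeMB = 20000

If the requirement is strictly "nothing older than 120 days in hot/warm", sizing homePath.maxDataSizeMB to roughly 120 days of this index's daily volume is the usual approximation.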
I have a single-instance Splunk environment with a 100 GB license; another single instance uses the same license, and we get around 6 GB of data per day across both. Instance A is very fast but instance B is very slow (both have the same resources). All searches and dashboards on B are really slow. For instance, a simple stats search over 24 hours takes 25 seconds there, compared to 2 seconds on the other instance. I checked the job inspector, which showed:

dispatch.evaluate.search = 12.84
dispatch.fetch.rcp.phase_0 = 7.78

I want to know where I should start checking on the host, and what steps to take.
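A minimal sketch of a first check, assuming both instances keep the default introspection data: comparing host resource usage often separates a starved machine (CPU, load, IO wait) from a Splunk-side cause such as expensive search-time configuration, which a high dispatch.evaluate.search can hint at.

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart avg(data.cpu_system_pct) avg(data.cpu_user_pct) avg(data.normalized_load_avg_1min)

Running the same search on both instances and comparing the curves is a quick way to see whether the slow box is resource-bound.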
This is an odd acceleration behavior that has us stumped. If any of you have worked with the Qualys Technology Add-on before: Qualys dumps its knowledge base into a CSV file, which we converted to a KV store with the following collections.conf accelerations enabled. The knowledge base has approx. 137,000 rows of about 20 columns.

[qualys_kb_kvstore]
accelerated_fields.QID_accel = {"QID": 1}
replicate = true

We then ran the following query with lookup local=true and with local=false (the default). According to the Job Inspector there was no real difference between running the lookup on the search head vs. the indexers. Without the lookup command, the query takes 3 seconds to complete over 17 million events. With the lookup added, it takes an extra 165 seconds for some reason, with the accelerations turned on.

index=<removed> (sourcetype="qualys:hostDetection" OR sourcetype="qualys_vm_detection") "HOSTVULN"
| fields _time HOST_ID QID
| stats count by HOST_ID, QID
| lookup qualys_kb_kvstore QID AS QID OUTPUTNEW PATCHABLE
| where PATCHABLE="YES"
| stats dc(HOST_ID) ```Number of patchable hosts!```

An idea I am going to try is adding PATCHABLE as another accelerated field to see if that changes anything. This change will require me to wait until tomorrow.

accelerated_fields.QID_accel = {"QID": 1, "PATCHABLE": 1}

Is there something we're missing to help avoid the lookup taking an extra 2-3 minutes?
I have a weird date/time value: 20240307105530.358753-360. I would like to make it more user friendly, 2024/03/07 10:55:30 (%Y/%m/%d %H:%M:%S), and drop the rest. I know you can use sed for this; however, I am not familiar with sed syntax. For example:

| rex mode=sed field=_raw "s//g"

Any sed gurus out there?
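A minimal sketch, assuming the value lives in a field called wmi_time (the name is made up; the format looks like a WMI/CIM timestamp):

| rex mode=sed field=wmi_time "s/^(\d{4})(\d{2})(\d{2})(\d{2})(\d{2})(\d{2}).*/\1\/\2\/\3 \4:\5:\6/"

An eval alternative that avoids sed entirely, parsing the first 14 digits and reformatting them:

| eval nice_time=strftime(strptime(substr(wmi_time, 1, 14), "%Y%m%d%H%M%S"), "%Y/%m/%d %H:%M:%S")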
I'm collecting all other logs, i.e. wineventlogs and splunkd logs; the inputs.conf is accurate; and the splunk user has full access to the file. What are some non-Splunk reasons that would prevent a file from being monitored?
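A couple of checks from the Splunk side that often surface the non-Splunk cause (exclusive file locks by the writing process, rotated files whose header CRC matches an already-read file, symlinks, or filesystems the tailing processor can't watch):

splunk list inputstatus
splunk btool inputs list monitor --debug

The first shows the TailingProcessor's per-file status, including why a file is being skipped; the second confirms which monitor stanza actually wins after configuration layering.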
If I had to write a document for myself on basic Splunk learning: to create a dashboard, I can either use inputs like index, source, and source fields, or I can give it a data set. Is that right? Can I write it like this, or am I wrong with the side headings?

Understanding input data: Explore different methods of data input into Splunk, such as ingesting data from files, network ports, or APIs.

Understanding data domains: Discover how to efficiently structure your data in Splunk using data models to drive analysis.