Hi @OC34  Can you confirm that the following search returns data for you?

index=cyberwatch sourcetype="cyberwatch:syslog"

The app you are referring to mentions that it dashboards Cyberwatch information, such as vulnerabilities, rules, assets, scans, and discoveries, but doesn't specifically mention syslog. Which dashboard are you looking to replicate?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
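If that search returns nothing, it may also help to check which Cyberwatch sourcetypes are actually present in the index. A quick diagnostic sketch (the index name is assumed from your post):

```
index=cyberwatch earliest=-7d
| stats count by sourcetype
```

If this shows sourcetypes like cbw:vuln rather than cyberwatch:syslog, the dashboard's data input selection would need to match those instead.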
Hi @RanjiRaje  The appends definitely aren't needed here, as each append re-runs the search for that data just to do the lookup. Instead, you could do something like this: replace the three append branches with lookups that match on any of the three possible keys, then keep the latest event per host/IP.

| loadjob savedsearch="userid:search:hostslists"
| eval host=upper(host)
| lookup lookupname Hostname as host OUTPUTNEW Hostname as H1, IP as IP1
| lookup lookupname IP as host OUTPUTNEW IP as IP2, Hostname as H2
| lookup lookupname AltName as host OUTPUTNEW AltName as A3, IP as IP3, Hostname as H3
| eval Hostname=coalesce(H1,H2,H3), IP=coalesce(IP1,IP2,IP3)
| eval starttime=relative_time(now(),"-10d@d")
| where latest>=starttime
| stats max(latest) as latest by host, Hostname, IP
| eval "Last event date"=strftime(latest,"%d %b %Y")
| table host Hostname IP "Last event date"
| rename host AS "Host referred in Splunk"

Let me know how you get on or if any bits need tweaking or explaining.
This issue occurs only with certain apps, such as Search & Reporting, ITSI, and a few others, but works seamlessly in the rest. The screenshots here are taken from the Search & Reporting app. It works with the _internal index but doesn't work with other indexes. In order for it to work, I need to extend the preset time range beyond the earliest time passed inside the search. I have not seen this behavior before.
| loadjob savedsearch="userid:search:hostslists"
| lookup lookupname Hostname as host OUTPUTNEW Hostname,IP
| eval Host=upper(host)
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname IP as host OUTPUTNEW IP,Hostname
     | eval Host=upper(host)]
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname AltName as host OUTPUTNEW AltName,IP,Hostname
     | where AltName != Hostname
     | eval Host=upper(host)]
| eval starttime=relative_time(now(),"-10d@d"),endtime=relative_time(now(),"-1d@d")
| convert ctime(latest),ctime(starttime),ctime(endtime)
| where latest<=endtime AND latest>=starttime
| rename latest as "Last event date", Host as "Host referred in Splunk"
| eval Hostname=if('Host referred in Splunk'!='IP','Host referred in Splunk',Hostname)
| stats count by Hostname,IP,"Host referred in Splunk","Last event date"
| fields - count
| dedup IP,Hostname

In my query I am using:
The saved search "hostslists" (it contains the list of hosts reporting to Splunk, along with the latest event datetime)
The lookup "lookupname" (it contains the fields Hostname, AltName, IP)

Aim: get the list of devices present in the lookup which have not been reporting for more than 10 days.

Logic: some devices report with "Hostname", some devices report with "AltName", and a few devices report with "IP", so I am checking all three fields and capturing "Last event date".

Now I am facing a challenge:

Hostname    IP         "Last event date"
Host1       ipaddr1    25th July    (by referring to IP)
Host1       ipaddr1    10th June    (by referring to Hostname)

I have two different "Last event date" values for the same "Hostname" and "IP". Here I have to consider the latest date, but my report is not showing it, and I am stuck on how to express that logic. Can anyone please help? Thanks for your response
Hello, I tried to import the App Dashboard for Cyberwatch, but the dashboard displays no data. My understanding is that for the data input I should select the following:
sourcetype = "cyberwatch:syslog"
app context = "Cyberwatch (SA-cyberwatch)"
index name = "cyberwatch"
But if I check the content of the dashboard, there are other sourcetypes: cbw:group, cbw:node, cbw:vuln, ... Can you clarify how to make the dashboard work from Cyberwatch syslog events? Regards, Olivier
Currently we have our Cisco ISE devices being sent to a syslog server, and a forwarder is then bringing that data into Splunk. We are running into an issue where ise_servername is showing not the device name, but the syslog server name. What am I missing? How would I go about fixing this?
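When a forwarder reads files written by a syslog server, the host metadata defaults to the syslog box, and that value can then leak into derived fields. One common remedy is an index-time host override via props/transforms on the forwarder or indexer. This is only a sketch: the sourcetype name and the regex are assumptions and must be adapted to your actual syslog line format.

```
# props.conf -- attach the transform to the ISE sourcetype (name assumed)
[cisco:ise:syslog]
TRANSFORMS-set_host = cisco_ise_set_host

# transforms.conf -- capture the originating device name from the syslog header
[cisco_ise_set_host]
REGEX = ^\S+\s+\S+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1
```

Note this only affects events indexed after the change; existing events keep the old host value.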
Hi @prashanthan1987  Check out https://effinium.com/mulesoft-logs-to-splunk/ which has a step-by-step guide on setting this up. There are also docs which you might find helpful at https://docs.mulesoft.com/cloudhub/custom-log-appender#:~:text=For%20logs%20to%20both%20flow,configure%20the%20CloudHub%20Log4j%20appender.&text=To%20enable%20the%20Log4j%20appender,Log4j%20appenders%20Log4J2CloudhubLogAppender%20and%20RollingFile%20.
Hello Splunkers! I am using HEC to send an HTML file to Splunk. The received event contains the HTML lines of code; the HTML forms a table with some data. Is there a way to create a dashboard that shows this table? Maybe another way to say this is: how can I extract the HTML from the HEC event and display it as a table on a dashboard? Thank You So Much, E Holz
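One possible approach, assuming the whole HTML table arrives as a single event and uses plain <tr>/<td> markup (both assumptions), is to pull the cell values out with rex at search time and feed the result to a dashboard table. A sketch, with a placeholder sourcetype:

```
index=main sourcetype=httpevent
| rex max_match=0 "(?s)<tr>(?<row>.+?)</tr>"
| mvexpand row
| rex field=row max_match=0 "<td>(?<cell>[^<]*)</td>"
| eval row_text=mvjoin(cell, " | ")
| table row_text
```

If the table has fixed, known columns, you could instead capture each column into its own named field in the second rex, which gives a proper multi-column dashboard table.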
It depends on what data you have in Splunk, or intend to ingest into Splunk. What do your events look like? (Please share some anonymised/obfuscated, representative examples of what you are dealing with.) Also, is the v4 and v6 address range fixed, i.e. do you know what it is ahead of time? How do you determine whether a device is "using" a v4 or v6 address? Can a device use both? Do you need an overall percentage (for a given time period), or do you need it broken down by, for example, day? Please provide more detail about your use case.
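For illustration, once those details are known, a search along these lines could produce the percentage. The index name, the src_ip field, and both CIDR ranges here are placeholders, not values from the question:

```
index=network_traffic
| eval ip_version=case(cidrmatch("10.0.0.0/8", src_ip), "v4",
                       cidrmatch("2001:db8::/32", src_ip), "v6")
| stats dc(src_ip) AS devices BY ip_version
| eventstats sum(devices) AS total
| eval pct=round(100*devices/total, 1)
```

This counts distinct addresses per family and computes each family's share; if a "device" is identified by something other than its address (e.g. a MAC or hostname field), the dc() would need to run over that field instead.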
We are looking into the feasibility of integrating Mule CloudHub with Splunk Cloud directly for log ingestion. Please suggest options.
I have devices using a specific v4 address range and a specific v6 address range. I'd like to get the percent of devices using the v6 range so we can track the progress of the conversion. I'm new to Splunk so I'm not sure how to proceed. 
Hi @markturner14  The easiest way is from the Monitoring Console: click Settings -> Forwarder Monitoring Setup, then click "Rebuild Forwarder assets...". This will rebuild the lookup table based on the time period you select.

Alternatively, you can use a search (within the splunk_monitoring_console app) or the lookup editor to manually delete entries, although the rebuild is generally advised instead, unless you have so many forwarders that the search would take a long time to run.

| inputlookup dmc_forwarder_assets where NOT hostname IN ("host1ToRemove","host2ToRemove")
| outputlookup dmc_forwarder_assets
That causes Splunk to rebuild the whole forwarder database from scratch, which might be a bit resource-intensive. If someone is brave enough and knows what they're doing, they can try to manually filter entries out of the dmc_forwarder_assets lookup. But be warned: you might break things and need to rebuild the database anyway.
I have updated my response (I couldn't remember if the default was to format with "OR" or not!)
Hi @ITWhisperer,  Thank you for your help.  With your suggestion, I also included a format command to format the output with "OR", which is now working.

index=* [| inputlookup hostList.csv | eval string="host=".host."*" | table string | format]

Once again, thank you for the help @ITWhisperer.
In the CMC, go to Forwarders->Forwarder monitoring setup and click the "Rebuild forwarder assets" button.
Hi All, I'm looking to remove missing forwarders reported by the CMC, where the servers have been permanently removed. I cannot see any way of doing this. Is this something that I have to raise a support case for? Many thanks, Mark
Don't you have some search-time field defined that overrides the original one? What does the search log say (especially the LISPY part) when you search for a specific sourcetype?
Hi @livehybrid  I'll assess again with the SQS-S3 Connector; I'll need to ingest both historic data and the ongoing data stream. Based on initial observations, I think I'll need to use multiple SQS-S3 Connectors, or use Lambda to funnel those into a single SQS-S3 Connector. Please let me know if there's any alternative to this assumption. Thanks!
Hi Guys, I'm trying to run a playbook and send an email using the SMTP service, but I am not able to do it. When I tested sending email from the SOAR CLI it worked, but from the console it's not happening. Can anyone tell me how to send emails from SOAR using the "passwordless" method? I was unable to find instructions or an SOP from Splunk. I've tested the connectivity over port 25 towards the SMTP server, and it's working.