All Posts
I know it has been a while, but did you ever find a solution? I have just encountered this same problem.
@livehybrid Thank you for the quick response. Let me give this a shot; I'll keep you posted here on how it goes.
Hi @vikasg It's possible to do this by setting the repositoryLocation value in deploymentclient.conf; however, in terms of supportability and recommendations... I'm not sure you'll get much positive encouragement to do this, as it's not really recommended. This approach may also break the UI editing capability for managing clients, because the UI is not built to manage this setting.

repositoryLocation = $SPLUNK_HOME/etc/apps
* The location where content installs when downloaded from a deployment server.
* For the Splunk platform instance on the deployment client to recognize an app or configuration content, install the app or content in the default location: $SPLUNK_HOME/etc/apps.
* NOTE: Apps and configuration content for deployment can be in other locations on the deployment server. Set both 'repositoryLocation' and 'serverRepositoryLocationPolicy' explicitly to ensure that the content installs on the deployment client in the correct location, which is $SPLUNK_HOME/etc/apps.
* The deployment client uses 'serverRepositoryLocationPolicy' to determine the value of 'repositoryLocation'.

See https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Deploymentclientconf for more info.
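A minimal sketch of what that could look like on the cluster manager, assuming you want deployed content to land in $SPLUNK_HOME/etc/master-apps (the deployment server URI below is a placeholder):

# deploymentclient.conf on the cluster manager (acting as a deployment client)
[deployment-client]
# Install deployed content somewhere other than the default $SPLUNK_HOME/etc/apps
repositoryLocation = $SPLUNK_HOME/etc/master-apps
# Reject the location advertised by the deployment server so that the
# client-side repositoryLocation above wins
serverRepositoryLocationPolicy = rejectAlways

[target-broker:deploymentServer]
targetUri = ds.example.com:8089

Note that even then the apps only sit in master-apps; you would still need to push them out to the peers (e.g. with splunk apply cluster-bundle).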
Hello Experts, I have never done this and wonder if there is a best way to achieve the below. I want to use the DS to push initial configurations to the CM, and then use the CM as a proxy for the IDX cluster. I tried the following:
1) Added the CM as a client, and in the serverclass I added 'stateOnClient = noop' to each of the app entries, just to make sure those applications do not run locally on the CM.
2) After the above step the confs land on the CM under /opt/splunk/etc/apps; however, I want them to land in /opt/splunk/etc/master-apps.
Question: can the DS put the directories in a different location than the default /opt/splunk/etc/apps?
No, I am not using a proxy; however, when I set TIME_FORMAT, TIME_PREFIX, and MAX_TIMESTAMP_LOOKAHEAD it started working. Thanks for your help.
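For anyone landing here later, a props.conf stanza of the shape described might look like the following; the sourcetype name and the exact format string are illustrative and would need to match your actual events:

# props.conf on the parsing tier (sourcetype name is hypothetical)
[my:custom:sourcetype]
# Anchor just before the timestamp; here the timestamp starts the event
TIME_PREFIX = ^
# Example for events timestamped like "2025-07-27 23:01:51.020"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Stop scanning for the timestamp this many characters past TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD = 25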
@livehybrid Thank you for your quick response. It is a custom method. But wouldn't HEC help? Or any apps?
Hello there Ultra Champ, thanks for the quick response. I will take a look at the process. Thanks again, @eholz1
Thanks. The server name is not in the source path, but it is in the log, right after the date:
Jul 27 23:01:51.020755 SDNWISEP0077 0018346907 (...)
I tried to use "Extract Fields" from the event view, but that didn't work.
Hi @cboillot Does the host value have the correct source name, or does this show the syslog server too? Does the syslog server write the files to a folder structure that contains the source hostname that you need within it, e.g. /var/log/syslog/<deviceName>/blah.log? If so, you would be able to use the host_segment value (https://docs.splunk.com/Documentation/Splunk/9.4.2/Admin/Inputsconf#:~:text=host_regex%27.%0A*%20No%20default.-,host_segment%20%3D%20%3Cinteger%3E,-*%20If%20set%20to) to specify the host as the source of the log. AFAIK, ise_servername ultimately comes from the 'host' field. If you cannot use host_segment, then another option is a REGEX props/transform to extract this from the raw event (assuming it is present there). If that isn't possible either and you'd like some further help writing it, please provide a sample event.
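To illustrate the props/transforms option: given the sample event in this thread ("Jul 27 23:01:51.020755 SDNWISEP0077 ..."), an index-time host override could be sketched like this. The sourcetype and stanza names are hypothetical, the regex would need validating against real events, and the config has to live on the first full Splunk Enterprise instance that parses the data (heavy forwarder or indexer):

# props.conf (sourcetype name is hypothetical)
[cisco:ise:syslog]
TRANSFORMS-set_host = set_host_from_event

# transforms.conf
[set_host_from_event]
# Capture the first token after the "Mon DD HH:MM:SS.ffffff" timestamp
REGEX = ^\w{3}\s+\d+\s+[\d:.]+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1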
Hi @eholz1 This is possible with some custom JS in a classic dashboard, but not yet possible with Dashboard Studio. Rather than me pasting the whole process here, check out https://community.splunk.com/t5/Dashboards-Visualizations/Render-HTML-code-from-search-result-in-Splunk-dashboard/m-p/221935 which gives an example of how to achieve this.
Hi @OC34 Can you confirm that the following search returns data for you?
index=cyberwatch sourcetype="cyberwatch:syslog"
The app you are referring to mentions that it dashboards Cyberwatch information such as vulnerabilities, rules, assets, scans, and discoveries, but doesn't specifically mention syslog. Which dashboard is it you are looking to replicate?
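If that search returns nothing, a quick way to see what the data actually arrived as (assuming it landed in the cyberwatch index at all) is:

| tstats count where index=cyberwatch by sourcetype

That should tell you whether you are dealing with cyberwatch:syslog or with the cbw:* sourcetypes the dashboards appear to expect.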
Hi @RanjiRaje The appends definitely aren't needed here, as each one re-runs a search for the same data just to do another lookup. Instead, you could replace the three append branches with a chain of lookups that matches on any of the three possible keys, then keep the latest event per host/IP:

| loadjob savedsearch="userid:search:hostslists"
| eval host=upper(host)
| lookup lookupname Hostname as host OUTPUTNEW Hostname as H1, IP as IP1
| lookup lookupname IP as host OUTPUTNEW IP as IP2, Hostname as H2
| lookup lookupname AltName as host OUTPUTNEW AltName as A3, IP as IP3, Hostname as H3
| eval Hostname=coalesce(H1,H2,H3), IP=coalesce(IP1,IP2,IP3)
| eval starttime=relative_time(now(),"-10d@d")
| where latest>=starttime
| stats max(latest) as latest by host, Hostname, IP
| eval "Last event date"=strftime(latest,"%d %b %Y")
| table host Hostname IP "Last event date"
| rename host AS "Host referred in Splunk"

Let me know how you get on, or if any bits need tweaking or explaining.
This issue occurs only with certain apps, such as Search and Reporting, ITSI, and a few others, but works seamlessly in other apps. The screenshots here are taken from the Search & Reporting app. It works with the _internal index but doesn't work with other indexes. For it to work, I need to extend the preset time range beyond the earliest time passed inside the search. I have not seen this behavior before.
| loadjob savedsearch="userid:search:hostslists"
| lookup lookupname Hostname as host OUTPUTNEW Hostname,IP
| eval Host=upper(host)
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname IP as host OUTPUTNEW IP,Hostname
     | eval Host=upper(host)]
| append
    [| loadjob savedsearch="userid:search:hostslists"
     | lookup lookupname AltName as host OUTPUTNEW AltName,IP,Hostname
     | where AltName != Hostname
     | eval Host=upper(host)]
| eval starttime=relative_time(now(),"-10d@d"),endtime=relative_time(now(),"-1d@d")
| convert ctime(latest),ctime(starttime),ctime(endtime)
| where latest<=endtime AND latest>=starttime
| rename latest as "Last event date", Host as "Host referred in Splunk"
| eval Hostname=if('Host referred in Splunk'!='IP','Host referred in Splunk',Hostname)
| stats count by Hostname,IP,"Host referred in Splunk","Last event date"
| fields - count
| dedup IP,Hostname

In my query I am using:
- the saved search "hostslists" (it contains the list of hosts reporting to Splunk along with the latest event datetime)
- the lookup "lookupname" (it contains the fields Hostname, AltName, IP)

Aim: get the list of devices present in the lookup that have not been reporting for more than 10 days.

Logic: some devices report with "Hostname", some devices report with "AltName", and a few devices report with "IP". So I check all three fields and capture "Last event date".

Now I am facing a challenge:

Hostname    IP         "Last event date"
Host1       ipaddr1    25th July    (by referring to IP)
Host1       ipaddr1    10th June    (by referring to Hostname)

I have two different "Last event date" values for the same "Hostname" and "IP". Here I have to consider the latest date, but my report is not showing the latest date, and I am stuck on how to express that logic. Can anyone please help? Thanks for your response.
Hello, I tried to import the App Dashboard for Cyberwatch, but the dashboard displays empty data. My understanding is that for the data input I should select the following:
sourcetype = "cyberwatch:syslog"
app context = "Cyberwatch (SA-cyberwatch)"
index name = "cyberwatch"
But if I check the content of the dashboard, there are other sourcetypes:
cbw:group
cbw:node
cbw:vuln
...
Can you clarify how to make the dashboard work from Cyberwatch syslog events? Regards, Olivier
Currently we have our Cisco ISE devices sending to a syslog server, and a forwarder then brings that into Splunk. We are running into an issue where ise_servername is showing not the device name but the syslog server name. What am I missing? How would I go about fixing this?
Hi @prashanthan1987 Check out https://effinium.com/mulesoft-logs-to-splunk/ which has a step-by-step guide on setting this up. There are also docs which you might find helpful at https://docs.mulesoft.com/cloudhub/custom-log-appender#:~:text=For%20logs%20to%20both%20flow,configure%20the%20CloudHub%20Log4j%20appender.&text=To%20enable%20the%20Log4j%20appender,Log4j%20appenders%20Log4J2CloudhubLogAppender%20and%20RollingFile%20.
Hello Splunkers! I am using HEC to send an HTML file to Splunk. The received event contains the HTML lines of code; the HTML forms a table with some data. Is there a way to create a dashboard from the HTML text that shows the table? Maybe another way to say this is: how can I extract the HTML code from the HEC event and display it on a dashboard? Thank you so much, E Holz
It depends on what data you have in Splunk, or intend to ingest into Splunk. What do your events look like? (Please share some anonymised/obfuscated, representative examples of what you are dealing with.) Also, is the v4 and v6 address range fixed, i.e. do you know what it is ahead of time? How do you determine whether a device is "using" a v4 or v6 address? Can a device use both? Do you need an overall percentage (for a given time period), or do you need it broken down by, for example, days? Please provide more detail about your use case.
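Once those questions are answered, something of this shape might be a starting point; the index and the src_ip field name are assumptions about data we have not yet seen:

index=yourindex src_ip=*
| eval ip_version=if(match(src_ip, ":"), "IPv6", "IPv4")
| stats count by ip_version
| eventstats sum(count) as total
| eval percent=round(100*count/total, 2)
| fields ip_version count percent

This relies on the simple heuristic that IPv6 addresses contain colons and IPv4 addresses do not.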
We are looking at the feasibility of integrating Mule CloudHub with Splunk Cloud directly for log ingestion. Please suggest an approach.