Hi, I've been asked to add inputs to my organization's Splunk Enterprise from Cisco routing and switching gear. I remember reading in the documentation that Splunk's recommended best practice is to use a syslog collector running a UF or HF between the routers/switches and the indexers, to keep the indexers from being flooded with incoming traffic.

I had thought to do this with a Linux machine running a UF or HF, ingesting the router/switch log traffic before forwarding it along to Splunk Enterprise, but I've been told an additional machine, even a virtual one, is a no-go. I don't believe a Windows syslog collector would work either, as Windows can't do this with native tools, and I've been told third-party software is also a no-go.

Could anyone recommend a solution that meets Splunk's best-practice suggestions without using a Linux syslog collector or a Windows syslog collector running third-party software?

Thank you, 603_Dan
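
If a dedicated collector really is ruled out, Splunk can listen for syslog directly on a network input on an indexer or heavy forwarder, with the caveat that this is exactly the less-resilient setup the best-practice guidance tries to avoid. A minimal inputs.conf sketch, where the port, index, and sourcetype are assumptions rather than known values for this environment:

# inputs.conf on the receiving indexer or heavy forwarder (hypothetical values)
[udp://514]
sourcetype = cisco:ios
index = network
connection_host = ip
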
Hi - briefly, I need to ensure that events from one UF (multiple sources) are duplicated into two indexes on one index cluster (development, dashboard updates, etc.). I can't find any references in the docs; it looks like most people want to know how to NOT duplicate events. Has anyone done this? Got advice on how to proceed? Thanks, -Rob
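
The usual index-time answer is event cloning: a transform with CLONE_SOURCETYPE makes a copy of each event under a second sourcetype, and a transform on that cloned sourcetype routes the copy to the other index. A sketch, with every stanza, sourcetype, and index name hypothetical:

# transforms.conf (on the indexers or a heavy forwarder)
[clone_event]
REGEX = .
CLONE_SOURCETYPE = your_sourcetype_copy

[route_copy]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = second_index

# props.conf
[your_sourcetype]
TRANSFORMS-clone = clone_event

[your_sourcetype_copy]
TRANSFORMS-route = route_copy
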
Hello, I have some issues writing a props.conf configuration for the following source data, which is stored in a text file. I also used TIMESTAMP_FIELDS = timeStamp there, to get field values under field names, but it's not working. My props configuration and a sample event are given below. Any help will be highly appreciated. Thank you so much.

[__auto__learned__]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIMESTAMP_FIELDS = timeStamp
TIME_PREFIX = ^\{\"timeStamp\"\:\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 29

{"timeStamp":"2021-06-21 14:53:56 EDT","appName":"OSD","userType":"FILTER","StatCd":null,"Amt":null,"errorMsg":"","eventId":"APP_ENTRY","eventType":"VIEW","fileSourceCd":null,"ipAddr":"11.212.41.151","mftCd":null,"outputCd":null,"planNum":null,"reasonCd":null,"returnCd":"00","sessionId":"XWGMwkncVD0m60OQBOahu8s/qG1c=","Period":null,"cat":"234207501","Type":null,"userId":"cdabea740a-g9a0-408f-a6a7-5ae70c689e6d","vsardata":{"uri":"/osd/rest/accountSummary","host":"appsa.rup.afsiep.net","ipAddress":"11.212.41.151","Id":"AXSabea753c-d9a0-408f-a6a7-5ae70c689e6d","requestId":"as58510cd-0459-614b7bc4-1afdd700-0bf875285d76","referer":https://saada.ruer.egsiep.net/osd/,"responseStatus":0}}
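
For what it's worth, TIMESTAMP_FIELDS only applies to structured inputs that use INDEXED_EXTRACTIONS; for plain JSON lines, TIME_PREFIX/TIME_FORMAT plus search-time JSON extraction is the usual route. A sketch of what such a stanza might look like (the sourcetype name is hypothetical, and %Z for the trailing EDT is an assumption to verify):

[my_json_events]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^\{"timeStamp":"
TIME_FORMAT = %Y-%m-%d %H:%M:%S %Z
MAX_TIMESTAMP_LOOKAHEAD = 30
KV_MODE = json
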
I am currently working on a project that requires me to set up DB Connect to pull data from MongoDB. When I try the Unity JDBC driver, it isn't working. Can anyone please suggest a workaround, or anything I might have missed? It is quite urgent; thanks in anticipation.

Hi all, I'm setting up an alerting process that monitors different servers on a single index and sends an alert if no events are fired over a 24-hour period. It's set to run at midnight and look back over the last 24 hours. If no events are found for any of the hosts, it should send an email with the details of that host. I'd like to set up one alert, if possible, rather than an alert for each host. I should specify that host is internal, not the 'splunk_server'; each host is one of our servers. Here's what I've tried so far:

index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
| rex field=source "\/[^\/]+\/[^\/]+\/(?<env>[^\/]+)\/.*"     // extract the environment
| stats count by host                                         // count by host
| lookup ######.csv serverLower AS host output IP             // add the IP of the host to the table
| table env, host, IP, count                                  // inline table passed to the alert

This gives a correct count by host, but it just returns that count to the email alert. I'd like to only send an email when the count is 0 for a specific host, and then send only the details of that host with count=0. Thanks
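
Hosts with zero events never show up in stats output, so the usual trick is to append the full list of expected hosts with count=0 and keep whatever never reported. A sketch, assuming a hypothetical expected_hosts.csv lookup with host and IP columns:

index=#### sourcetype=#### source=/usr/app/*/logs/#####.txt
| stats count by host
| append [| inputlookup expected_hosts.csv | fields host | eval count=0]
| stats sum(count) as count by host
| where count=0
| lookup expected_hosts.csv host output IP

Triggering the alert on "number of results > 0" then emails only the silent hosts.
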
Hi Splunkers, how do I create incidents in SNOW from Splunk SPL? We have the "ServiceNow Event Integration" alert action in use, which creates incidents when an alert triggers an event, but I'm trying to use the same thing from a Splunk search. I tried using the sendalert command as below and got an error:

| sendalert servicenow param.severity="4" param.assigned_to="Assignment group" param.short_description="Alert Name" param.description="This is a test" param.u_environment="Dev" param.node=hostname param.resource="Nothing" param.type="Name"

Error: Error in 'sendalert' command: Alert action "Servicenow" not found.
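
sendalert expects the alert action's internal stanza name from alert_actions.conf, not its display label, which is likely why "servicenow" isn't found. With the Splunk Add-on for ServiceNow the actions are typically registered as snow_incident and snow_event, but that name is an assumption worth verifying in your environment (for example with: splunk btool alert_actions list). A sketch:

| sendalert snow_incident param.severity="4" param.assigned_to="Assignment group" param.short_description="Alert Name" param.description="This is a test"
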
Hello, I have the below list of files in a directory, and many more - here are a few examples:

210928105858:jira:HDL-APP004036:/hboprod/itdept/jira/domain/logs:$ ll
total 147936
-rw-r--r-- 1 jira jira  376923 Sep 26 23:59 access_log.2021-09-26
-rw-r--r-- 1 jira jira 1547320 Sep 28 00:00 access_log.2021-09-27
-rw-r--r-- 1 jira jira  891543 Sep 28 10:56 access_log.2021-09-28
-rw-r--r-- 1 jira jira  881194 Sep 28 10:02 atlassian-jira-gc-2021-09-20_11-52-13.log.0.current
-rw-r--r-- 1 jira jira  208279 Sep 28 10:49 atlassian-jira-gc-2021-09-28_10-04-10.log.0.current
-rw-r----- 1 jira jira    8964 Sep 20 11:52 catalina.2021-09-20.log
-rw-r--r-- 1 jira jira    8965 Sep 28 10:04 catalina.2021-09-28.log
-rw-r--r-- 1 jira jira  768821 Sep 28 10:12 catalina.out
-rw-r--r-- 1 jira jira       0 Sep 20 11:52 host-manager.2021-09-20.log
-rw-r--r-- 1 jira jira       0 Sep 28 10:04 host-manager.2021-09-28.log
-rw-r----- 1 jira jira       0 Sep 17 00:14 localhost.2021-09-17.log
-rw-r--r-- 1 jira jira       0 Sep 20 11:52 localhost.2021-09-20.log
-rw-r--r-- 1 jira jira       0 Sep 28 10:04 localhost.2021-09-28.log
-rw-r--r-- 1 jira jira       0 Sep 20 11:52 manager.2021-09-20.log
-rw-r--r-- 1 jira jira       0 Sep 28 10:04 manager.2021-09-28.log

I want to monitor only catalina.out and the access_log files, not the others. I have configured a monitoring stanza for catalina.out and it is working as expected for me:

[monitor:////hboprod/itdept/jira/domain/logs/catalina.out]
sourcetype = log4j
ignoreOlderThan = 7d
crcSalt = <string>

I need help writing a monitoring stanza for access_log, as this file gets created daily with that day's date in its name. How can I configure these files to be monitored?
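
monitor stanzas accept wildcards in the file name, so the dated access logs can all be picked up with a single trailing wildcard. A sketch (the sourcetype is an assumption; substitute whatever fits your access-log format):

[monitor:///hboprod/itdept/jira/domain/logs/access_log.*]
sourcetype = access_combined
ignoreOlderThan = 7d
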
At our organization we use Splunk with Apache to provide LDAP authentication using smart cards. We are required to present a consent banner when someone browses to the site for our Splunk environment. I don't have much knowledge of Apache at all, and googling doesn't turn up much information on setting up a banner that redirects to Splunk after accepting. Looking for assistance to see if anyone else has been able to successfully implement a consent banner for their Splunk environment.
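
One pattern that can work here (a sketch under stated assumptions, not a vetted implementation) is to gate everything behind a consent cookie with mod_rewrite: requests without the cookie are redirected to a static banner page, and the banner's Accept button sets the cookie and sends the user on to Splunk. The cookie name and banner path below are hypothetical:

# In the Apache vhost that fronts Splunk (requires mod_rewrite)
RewriteEngine On
# No consent cookie yet, and not already on the banner page -> redirect to the banner
RewriteCond %{HTTP_COOKIE} !consent=accepted
RewriteCond %{REQUEST_URI} !^/banner\.html
RewriteRule ^ /banner.html [R,L]

banner.html would then set consent=accepted (for example via a small piece of JavaScript on the Accept button) and redirect back to /.
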
I think this is a pretty basic question, but I'd appreciate some help with it. I'm trying to produce an exportable, email-able report (CSV or Excel) of remote worker locations that shows how often people are logging in from each state. I'd like to show a count of how many logins per IP, per UserId, per state on unique days, but I haven't been able to find a way to get that count into a table. The basic query I'm using is this:

`m365_default_index` sourcetype="o365:management:activity" Workload=AzureActiveDirectory Operation=UserLoggedIn
| iplocation ClientIP
| search Country="United States"
| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rename Region AS State UserId AS User ClientIP AS "Client IP"
| fields "State", "User", "Client IP", "Time"
| table State User Time "Client IP"
| sort State, User, - Time

But it returns more rows than I can output, and I'd rather have a count than all the raw login rows. If I could figure out how to add a count so that the table's output was more like:

State User "Client IP" "Count of logins on unique days"

where I could apply a search filter on some count > x, and I could turn that into an email-able report, that would be ideal.
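
Counting distinct days per state/user/IP is what dc() over a derived day field does. A sketch built on your base search (the threshold 5 is a placeholder for your x):

`m365_default_index` sourcetype="o365:management:activity" Workload=AzureActiveDirectory Operation=UserLoggedIn
| iplocation ClientIP
| search Country="United States"
| eval day=strftime(_time, "%Y-%m-%d")
| stats dc(day) AS "Count of logins on unique days" BY Region, UserId, ClientIP
| rename Region AS State, UserId AS User, ClientIP AS "Client IP"
| where 'Count of logins on unique days' > 5
| sort State, User

Because stats collapses the raw logins, the output drops to one row per state/user/IP, which should fit comfortably in an emailed CSV.
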
Hello, the Tripwire Enterprise Add-on for Splunk was installed on Splunk 8.2.1. I followed the Tripwire document, but the setup page does not run.

$SPLUNK_HOME/etc/apps/TA-tripwire_enterprise/local/app.conf
[install]
python.version = python2

Is there anything that needs to be corrected?
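
Newer Splunk releases default to Python 3, and setup pages pinned to Python 2 are a common source of breakage there. If the add-on's scripts are Python 3 compatible (an assumption to test, not a confirmed fix), switching the pin and restarting may help:

# $SPLUNK_HOME/etc/apps/TA-tripwire_enterprise/local/app.conf
[install]
python.version = python3
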
Hello guys, how can I use the source file's modification date instead of the "guessed" or extracted timestamp from a CSV file? I'm using a specific sourcetype and extracting fields at search time (field transformations). Thanks. Splunk 7.3.4

Hi all, I have a question and need to do the following: search condition_1 in index_1 and get the value of field_1 and the value of field_2. Then search for that value of field_1 in index_2 and get the value of field_3. I want to calculate the difference between the value of field_2 and the value of field_3. Is it possible to achieve this in a single query?
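
Yes - a join on field_1 (or a stats-based merge, which scales better on large result sets) does this in one query. A sketch using your placeholder names:

index=index_1 condition_1
| fields field_1, field_2
| join type=inner field_1
    [ search index=index_2 | fields field_1, field_3 ]
| eval difference = field_2 - field_3
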
Hi, is there a way to determine whether an index has stopped logging / has gone inactive? I have tried looking through the docs, but I am new to Splunk and trying to figure this out. I know we can use metadata for hosts and sourcetypes, but it doesn't seem to work for indexes. Any recommendations? Timo
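
tstats can cheaply report the most recent event time per index; any index whose latest event is older than your threshold is effectively inactive. A sketch with a 24-hour threshold (adjust to taste):

| tstats latest(_time) AS latest WHERE index=* BY index
| eval hours_since_last_event = round((now() - latest) / 3600, 1)
| where hours_since_last_event > 24
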
Hi folks, I need to split a multiline field:

-2.9416067
53.0374031
0.0

The first line is the latitude and the second line is the longitude. Is it possible to extract these two fields from this multi-line field?
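
A rex over the field can pick the lines apart by position, since . does not match newlines by default. A sketch, assuming the multiline field is called coords (hypothetical name):

| rex field=coords "^(?<latitude>.+)\n(?<longitude>.+)"

Alternatively, split() into a multivalue field and mvindex() by line number gets the same result.
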
I'm trying to match events in transforms.conf on key=value strings (like EventCode=103 and so on). It wouldn't work unless I escaped the equals sign with a backslash. So a config entry like

REGEX=ComputerName=whatever.domain.com

doesn't seem to work, but

REGEX=ComputerName\=whatever.domain.com

does. And I generally don't mind it, but I would love to see a piece of docs that says the equals sign has to be escaped. Normally it doesn't, so I have no idea if it's something to do with the regex itself or with conf-file parsing. Can anyone point me to a proper doc?
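
No pointer to a doc from me, but a side note on the pattern itself: the dots in the hostname are regex metacharacters and will match any character unless escaped, so a stricter version of the working entry would be (stanza name hypothetical):

[match_computer]
REGEX = ComputerName\=whatever\.domain\.com
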
Hi, let's imagine I have these rows:

Name       Value1  Value2
foo        1       2
foo12      1       6
foodazd    5       6
fooaoke    4       3
foo56      2       3
bar        1       2
barjodpez  7       4
barjo      7       4
bar125     7       5

I would like to create a search that gives:

Name                             Value1   Value2
foo foo12 foodazd fooaoke foo56  1 2 4 5  2 3 6
bar barjodpez barjo bar125       1 7      2 4 5

So to explain in words, I want to merge rows based on the smallest common substring present in the Name column (here, foo and bar). Thanks for your help.
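
Computing the smallest common substring across rows is awkward in pure SPL, but if the set of group prefixes is known up front (an assumption here), a case()/stats combination produces the merged shape, since values() both deduplicates and sorts:

| eval group = case(like(Name, "foo%"), "foo", like(Name, "bar%"), "bar")
| stats values(Name) AS Name, values(Value1) AS Value1, values(Value2) AS Value2 BY group
| fields - group
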
Hello, I have a problem with my distributed environment where some of my instances appear greyed out under CPU and memory utilization. If I open the specific instance panels, I see the red icon with the errors below. This is very strange because it affects the indexers only. The only difference with them is that the web service is disabled, as per Splunk best practices. I tried opening a case with Splunk support, but we couldn't find the solution yet.

Splunk version: 8.0.3 (indexers), 8.0.5 (everything else) - is this mismatch a problem?

[subsearch][servername] Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API.
[subsearch][servername] Unexpected status for to fetch REST endpoint uri=https://127.0.0.1:8089/services/server/info?count=0&strict=false from server=https://127.0.0.1:8089 - Bad Request

MORE INFO: if I query the endpoint manually, like https://10.10.10.1:8089/services/server/status/resource-usage/hostwide, I get the output in my browser.

I read this post: https://community.splunk.com/t5/Getting-Data-In/Splunk-Management-Console-Error-subsearch-Rest-Processor-https/m-p/326219#M60633 - it talks about the indexer role. My Cluster Master is also the SHC Deployer (Search Head Cluster Deployer); would this be the role I have to move? It's not an indexer; I have 6 dedicated indexers and 5 dedicated search heads.

Hi, I have a search that covers millions of events and is extremely slow; is there a way to speed it up? This is the query:

index=audit | table db_name | dedup db_name | outputlookup audit.csv
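
dedup after table drags every raw event through the search head; a stats by db_name pushes the deduplication down to the indexers and usually runs much faster for this kind of distinct-values export. A sketch:

index=audit
| fields db_name
| stats count BY db_name
| fields - count
| outputlookup audit.csv
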
I'm practicing creating dashboards using a base search. My dashboard has a pie chart which says "no results found", but when I click "open in search" I get the results as stats in search and have no problems visualizing them via the visualization tab. My dashboard source code:

<dashboard>
  <label>Basesearch Test</label>
  <search id="basesearch">
    <query>index=fishingbot</query>
    <earliest>$timepickearliest$</earliest>
    <latest>$timepicklatest$</latest>
  </search>
  <row>
    <panel>
      <title>Sessionstart</title>
      <single>
        <search base="basesearch">
          <query>eval start = strftime($timepickearliest$, "%d.%m.%Y %H:%M:%S") | table start</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
    <panel>
      <title>Sessionende</title>
      <single>
        <search base="basesearch">
          <query>eval end = strftime($timepicklatest$, "%d.%m.%Y %H:%M:%S") | table end</query>
        </search>
        <option name="drilldown">none</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <search base="basesearch">
          <query>rex field=message "\"(?&lt;Loot&gt;\w+)\" gefischt" | stats count by Loot</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</dashboard>

The time picker tokens come from a drilldown in another dashboard where the bot sessions are listed. I tried chart instead of stats, but it's the same result. Where is the flaw?
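
One well-known pitfall with post-process searches: a non-transforming base search such as index=fishingbot only passes along fields that are explicitly requested, so the rex in the pie-chart panel may never see a message field. A sketch of the usual fix, assuming message is the field your rex needs:

<search id="basesearch">
  <query>index=fishingbot | fields _time, message</query>
  <earliest>$timepickearliest$</earliest>
  <latest>$timepicklatest$</latest>
</search>

Alternatively, make the base search itself transforming (for example, end it with the stats) and post-process from there.
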