All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I need a query that compares run statistics for a list of jobs (msg.jobName = RLMMTP*) that run every day. The statistics to compare are the Actual Start Time and the Expected Start Time. The Actual Start Time comes from the Splunk logs (timestamp), while the Expected Start Time needs to be entered manually into the query (it does not come from Splunk). Here is an example of what the output would look like:

msg.jobName: ActualStart: ExpectedStart:
RLMMTP89 15:31:17 20:30:11
RLMMTP72 12:34:21 13:22:46

Here is the query I have so far:

```
sourcetype="cf" source=l_cell msg.jobName=RLMMTP*
| spath "msg.status.Code" | search "msg.status.Code"=*
| spath "msg.Type" | search "msg.Type"=*
| spath "msg.message" | search "msg.message"="RECORD PROCESSED"
| eval day = strftime(_time, "%d")
| stats earliest(timestamp) as startTime, latest(timestamp) as endTime count by msg.jobName
| eval startTime=substr(startTime,1,13)
| eval startTimeD=strftime(startTime/1000, "%H:%M:%S")
| eval ExpectedStart="15:31:17"
| eval ActualStart=startTimeD
| append [| makeresults | eval ExpectedStart="09:58:24" | eval ActualStart=startTimeD]
| append [| makeresults | eval ExpectedStart="17:56:30" | eval ActualStart=startTimeD]
| table msg.jobName ActualStart ExpectedStart
```

However, each job has a different Expected Start Time, which this query does not handle. Any help is appreciated.
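A hedged sketch of two ways to give each job its own Expected Start Time. Inline, with case() (the job names and times below are illustrative):

```
| eval ExpectedStart=case(
    'msg.jobName'=="RLMMTP89", "20:30:11",
    'msg.jobName'=="RLMMTP72", "13:22:46",
    true(), "unknown")
```

Or, more maintainably, keep a small lookup file expected_starts.csv with columns jobName and ExpectedStart, and use:

```
| lookup expected_starts.csv jobName AS msg.jobName OUTPUT ExpectedStart
```

Either approach replaces the hard-coded eval ExpectedStart lines and the two append subsearches, since every job row then picks up its own expected time.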
The subject states the question: is there a limit on how many subsearches I can use within a single query? As an example of what I am doing:

```
index="MyIndex" source="MySource" computerName=*
| append [ search index="MyOtherIndex" source="MyOtherSource" computerName=*
| table computerName
```

Everything works as expected up to here. If I add another | append, the search just stays on "parsing search". No results, no errors. The time preset does not seem to be an issue; I have used from 5 minutes up to 30 minutes, where I can confirm there is a result for each query individually.
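For reference, a hedged sketch with two appends, each subsearch closed with its own bracket. The append in the query above is missing its closing ] before | table, which by itself can leave the search stuck at "parsing search". Index and source names here are illustrative:

```
index="MyIndex" source="MySource" computerName=*
| append [ search index="MyOtherIndex" source="MyOtherSource" computerName=* ]
| append [ search index="ThirdIndex" source="ThirdSource" computerName=* ]
| table computerName
```

Beyond syntax, subsearches are also bounded by limits.conf result-count and runtime caps, so an append can silently truncate past those limits rather than erroring.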
I set up syslog output forwarding per the Splunk docs, but am not seeing anything being sent out, nor receiving it on the endpoint. Here is what I have applied in the heavy forwarder's outputs.conf:

```
[tcpout]
defaultGroup = indexer_group,forwarders_syslog
useACK = true

[tcpout:indexer_group]
server = indexer_ip_address:indexer_port
clientCert = xxxxxxxx
maxQueueSize = 20MB
sslPassword = xxxxxxxxx

[tcpout:forwarders_syslog]
server = syslog_ip:syslog_port
clientCert = xxxxxxx
maxQueueSize = 20MB
sslPassword = xxxxxxxx
blockOnCloning = false
dropClonedEventsOnQueueFull = 10
useACK = false
```

Note: the configuration for forwarding the data to syslog is under [tcpout:forwarders_syslog]. The following errors appear in splunkd.log when the heavy forwarder tries to forward the logs to the syslog server:

```
WARN TcpOutputProc - Cooked connection to ip=syslog_ip:syslog_port timed out
ERROR TcpOutputFd - Connection to host=syslog_ip:syslog_port failed
WARN TcpOutputFd - Connect to syslog_ip:syslog_port failed. Connection refused
```

I also do not see any connection issues when troubleshooting: from the heavy forwarder, telnet to the syslog server on the specified port connects; on the receiving server, netstat -tnlp | grep rsyslog shows the specified port listening on TCP. I am not sure where or what else I should be checking to send whatever the heavy forwarder currently sends to the indexer to a syslog server as well.
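One likely cause, given the "Cooked connection ... timed out" warning: a [tcpout:...] group sends data in Splunk's cooked S2S format unless told otherwise, and rsyslog does not speak that protocol even though the TCP port accepts connections. A hedged sketch of two alternatives, keeping the placeholders from the question:

```
# Option 1: keep a tcpout group, but send raw (uncooked) data
[tcpout:forwarders_syslog]
server = syslog_ip:syslog_port
sendCookedData = false
useACK = false

# Option 2: a true syslog output group (adds syslog framing)
[syslog:forwarders_syslog]
server = syslog_ip:syslog_port
type = tcp
```

Note that [syslog:...] groups are not selected via defaultGroup; they are normally routed to with a _SYSLog_ROUTING-style transform (_SYSLOG_ROUTING) in props.conf/transforms.conf, which is worth verifying against the outputs.conf spec for your version.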
I have an inventory dashboard. I'd like to have a checkbox to "show these items only". My thinking is: when checked, the search filter will be applied; when unchecked, it will be ignored. Hope that makes sense.
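A hedged Simple XML sketch of one common pattern: the checkbox sets a token to a filter clause when checked and to a match-all when unchecked. The token names and the filter itself (status="active") are placeholders:

```
<input type="checkbox" token="tok_show_only" searchWhenChanged="true">
  <label></label>
  <choice value="show">Show these items only</choice>
  <change>
    <condition value="show">
      <set token="item_filter">status="active"</set>
    </condition>
    <condition>
      <set token="item_filter">*</set>
    </condition>
  </change>
</input>
```

The panel search then references the token, e.g. index=inventory $item_filter$ | table host item status, so an unchecked box degrades to a harmless wildcard.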
I need to copy all the customizations in Splunk (apps, saved searches, user profiles, and all the config files), everything that's customized, so that whenever I do a fresh build of Splunk I can push all the customizations back to the new build in one shot. Any script or approach would be of real help. Thanks.
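A hedged sketch of the usual approach: everything listed (apps, saved searches, users, local .conf files) lives under $SPLUNK_HOME/etc, so archiving that tree captures the customizations. The paths are assumptions; the demo at the bottom runs against a stand-in directory, not a real install:

```shell
# Back up and restore $SPLUNK_HOME/etc, which holds apps, saved searches,
# user profiles, and all local .conf customizations.
backup_splunk_etc() {
    splunk_home="$1"   # e.g. /opt/splunk
    archive="$2"       # e.g. /backups/splunk-etc.tar.gz
    tar -czf "$archive" -C "$splunk_home" etc
}

restore_splunk_etc() {
    splunk_home="$1"
    archive="$2"
    tar -xzf "$archive" -C "$splunk_home"
}

# Demo against a stand-in directory tree (not a real Splunk install):
demo=$(mktemp -d)
mkdir -p "$demo/etc/apps/search/local"
echo "[ui]" > "$demo/etc/apps/search/local/app.conf"
backup_splunk_etc "$demo" "$demo/etc-backup.tar.gz"
tar -tzf "$demo/etc-backup.tar.gz"
```

After restoring into a fresh build, restart Splunk so the restored configs load. Since etc/auth (including splunk.secret) is inside the archive, encrypted passwords should remain decryptable on the new build, provided the whole etc tree is restored together.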
Hi, is it possible to use a submitButton in a panel? I would like to add it to the XML below. Thanks.

```
<input type="text" token="tok_filterhost" searchWhenChanged="true">
  <label>Filter by hostname</label>
  <default>*</default>
  <initialValue>*</initialValue>
</input>
```
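For context, in Simple XML submitButton is an attribute of the form's fieldset rather than of an individual panel. A hedged sketch wrapping the input from the question (the rest of the form is elided):

```
<form>
  <fieldset submitButton="true" autoRun="false">
    <input type="text" token="tok_filterhost" searchWhenChanged="false">
      <label>Filter by hostname</label>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
  </fieldset>
</form>
```

With a submit button in play, searchWhenChanged is usually set to false so the searches only run when Submit is clicked.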
Hi, I have a field called SESSION_ID which has a value "0cdWYCu982HhTjoSYMUgnrCIW8c1apbU!1706637738!1581997108157". I want to trim or modify the string from the second exclamation mark onward, to make it look like this: "0cdWYCu982HhTjoSYMUgnrCIW8c1apbU!1706637738". Can someone please help me out?
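A sketch with eval and replace(): capture everything up to the second "!" and drop the rest (the regex assumes the three-part layout shown in the sample value):

```
| eval SESSION_ID=replace(SESSION_ID, "^([^!]*![^!]*)!.*$", "\1")
```

For the sample value this yields 0cdWYCu982HhTjoSYMUgnrCIW8c1apbU!1706637738. An equivalent without regex: | eval SESSION_ID=mvjoin(mvindex(split(SESSION_ID, "!"), 0, 1), "!"), which keeps the first two "!"-separated segments.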
I am trying to use an eval, but there is a wildcard and I noticed this does not work. How can I get this to work? I tried using match or like but I can't get it working:

```
... count(eval(logger="blablabla test HTTP status: 200." OR logger="something id * HTTP status: 200")) AS Example
```
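A hedged sketch of the likely fix: inside eval, the = comparison treats * literally (wildcard expansion only happens in the search command), so the variable part needs like() (SQL-style % wildcards) or match() (regex). The logger strings are taken from the question:

```
... count(eval(logger=="blablabla test HTTP status: 200." OR like(logger, "something id % HTTP status: 200"))) AS Example
```

The regex equivalent of the like() clause would be match(logger, "^something id .* HTTP status: 200$").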
Hello, I currently have a Splunk universal forwarder on a few of my Windows servers. The UF config is received from my Splunk deployment server. I have .exe processes that are currently using much of my license, and I would like to stop Splunk from indexing them. All the .exe processes I want to ignore are under the c:\Program Files (x86)\Camera System Center 6* subdirectory (I included * to cover all of them). Would I just add something like the below to the universal forwarder config file on the deployment server to achieve my goal?

```
# Disable Camera Process Monitoring
[monitor:c:\Program Files (x86)\Camera System Center 6\*]
disabled = 1
```

Thank you!
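If these files are picked up by a file monitor, a hedged sketch of the stanza form inputs.conf expects (note the monitor:// scheme; the path is carried over from the question, and where the wildcard belongs is an assumption):

```
# Disable Camera Process Monitoring
[monitor://c:\Program Files (x86)\Camera System Center 6*]
disabled = 1
```

A disabled stanza only takes effect if it matches an existing input stanza name exactly; if these files are instead swept up by a broader monitor, adding a blacklist regex to that broader stanza is the usual alternative.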
Hello, we have a relatively small network at a remote location that needs to forward logs to our Splunk instance, and this remote site has particularly low bandwidth. During our original Splunk architecture call we were advised to set up a syslog collector on this remote network and schedule a window, ideally overnight, when logs can be forwarded to the main Splunk instance for indexing. Since ALL systems at this remote location are Windows based, the question we now face is: in your experience, which syslog collector for Windows integrates most easily with Splunk? Is syslog-ng the best/most common application for this task? Thank you.
Hi, I currently use the WinRegMon stanza in inputs.conf. I monitor all changes within the user Software hive, but there is one path I want to exclude. I tried using the blacklist feature, but it didn't work. See my config:

```
hive = \REGISTRY\USER\.\Software\\?.
blacklist1 = \REGISTRY\USER\.\Software\Classes\.\MuiCache\\?.*
proc = .*
```

The blacklist doesn't work. Can someone spot the failure? Thanks in advance.
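A hedged sketch of what the intended regexes probably look like with explicit escaping (the \\?. fragments in the pasted config look mangled, so the patterns below are a reconstruction, and the stanza name is a placeholder). If blacklist turns out not to be honored for WinRegMon on this version, the same exclusion can be expressed as a negative lookahead inside the hive regex itself:

```
[WinRegMon://user_software]
hive = \\REGISTRY\\USER\\[^\\]+\\Software\\.*
blacklist1 = \\REGISTRY\\USER\\[^\\]+\\Software\\Classes\\.*\\MuiCache\\.*
proc = .*
type = set|create|delete|rename

# Alternative, with the exclusion built into hive:
# hive = \\REGISTRY\\USER\\[^\\]+\\Software\\(?!Classes\\.*\\MuiCache).*
```

Both forms should be checked against the inputs.conf spec for the installed version before rollout.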
Is it possible to use heavy forwarder servers as deployment servers? We have an implementation scenario with 15,000 workstations sending logs through UFs to 7 heavy forwarders. I already have a deployment server, but I am afraid it will have CPU load problems, and I wanted to use the 7 heavy forwarders as intermediary deployment servers. Is that possible? Thanks in advance.
Hello, I have a filename from which I need to extract the date: cvs.2020-02-10.3.log. I understand that a modification is needed in the datetime.xml file. Can someone help with the CDATA tag and regex for that type of log name? The time is already extracted correctly from inside the log. Thanks, Ran
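A heavily hedged sketch of the datetime.xml pieces, modeled on the stock file's structure. The idea, seen in community answers, is that a pattern written against the literal source:: prefix is matched against the file name rather than the event text; the define name is a placeholder, and this should be verified against the stock $SPLUNK_HOME/etc/datetime.xml before use:

```
<define name="cvs_filename_date" extract="year, month, day">
    <text><![CDATA[source::.*?cvs\.(\d{4})-(\d{2})-(\d{2})\.\d+\.log]]></text>
</define>

<datePatterns>
    <use name="cvs_filename_date"/>
</datePatterns>
```

The existing time patterns can stay in place so the time of day still comes from inside the log, and the sourcetype's DATETIME_CONFIG in props.conf must point at the customized copy of the file rather than the stock one.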
Hello guys, for a couple of weeks I have had an issue with an application flow map that does not appear consistently. It appeared for a few hours twice, then disappeared. The issue started a few days after we updated our controller to the latest version, 4.5.17.2791, and affects only one application. Sometimes the Load and Response graphs are empty, sometimes not; the Error graph is always present. From the left-side menu:

- The Business Transactions page is accessible; I can see status by BT and open their dashboards, but I cannot drill down to their nodes.
- Tiers and Nodes, Database Calls, and Remote Services all appear on their pages, but when focusing on one of them the flow map is empty.

Has anyone faced the same issue and can share their experience? For the tech support guys: there is a ticket open on my account if someone wants to look at the screenshots and logs. Best, Daniel
Hello, I need to create an automatic lookup to match certain hosts for a project I'm working on. The thing is, I have a list of servers in my scope, but this list sometimes contains only hostnames and other times the full FQDN, and that may differ from what I have in the host field in my Splunk metadata. Example of the CSV:

```
"host","description"
host1, dboraclehost1
host2, dboraclehost2
host3.mydomain.net, dboraclehost3
host4, "host4"
host5.dathost.net, "thehost5"
```

And in Splunk, in my host field I may have:

```
host1.mydomain.net
host5
host3
host4.thedomain.com
```

If this can be achieved via the UI that would be best, but I can also do it with the .conf files. Best regards!
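One hedged approach: normalize both sides to the short hostname before matching (the lookup and field names below are illustrative). At search time:

```
| eval host_short=lower(mvindex(split(host, "."), 0))
| lookup scope_normalized host_short OUTPUT description
```

with the CSV normalized the same way once:

```
| inputlookup scope.csv
| eval host_short=lower(mvindex(split(host, "."), 0))
| outputlookup scope_normalized.csv
```

For a UI-configured automatic lookup, which cannot run eval, a calculated field (EVAL-host_short) on the sourcetype plus an automatic lookup keyed on host_short achieves the same; alternatively, transforms.conf supports match_type = WILDCARD(host) if the CSV entries are rewritten with wildcards (host1*, etc.).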
I have a search that returns results for the previous three months for multiple entities. Due to the large number of entities, I want to limit the search to the top 10. This is the search:

```
sourcetype=escada_message Message=FAILED AOR_Group=Gas NOT ACKNOWLEDGD NOT DELETED
| rex field=Message "(?[A-Za-z]+\s[A-Za-z]+)"
| eval Month=strftime(_time,"%m/%Y")
| chart count over Message by Month
```
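A hedged sketch: use a subsearch over the same data to pick the 10 highest-count Messages, then chart only those. Since top emits count and percent columns, the trailing fields keeps the subsearch acting as a pure filter:

```
sourcetype=escada_message Message=FAILED AOR_Group=Gas NOT ACKNOWLEDGD NOT DELETED
    [ search sourcetype=escada_message Message=FAILED AOR_Group=Gas NOT ACKNOWLEDGD NOT DELETED
      | top limit=10 Message
      | fields Message ]
| eval Month=strftime(_time,"%m/%Y")
| chart count over Message by Month
```

Note that chart's own limit option applies to the split-by field (Month here), which is why the restriction to 10 Messages is done with the subsearch filter instead.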
Hi, I am not able to fetch the full JSON payload using a scripted input with Splunk Cloud. I have installed a universal forwarder that is connected to Splunk Cloud, and created a simple app that runs a Python script. The script emits the API output in JSON format. When viewing the event in the Search app I cannot see my full JSON payload: it is truncated at 10,000 characters, while my payload has more than 30,000 characters. Per suggestions from Splunk Answers we changed the limits.conf and props.conf files in /system/default, but still could not sort it out. Any suggestions would be appreciated. Thank you.
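A hedged sketch of the setting usually involved, TRUNCATE in props.conf for the relevant sourcetype (the name below is a placeholder). Two caveats: for a UF sending to Splunk Cloud, line breaking and truncation happen on the Cloud indexers, not on the forwarder, so the setting has to be deployed there (via a small props app uploaded to Splunk Cloud, or a support ticket); and edits under /system/default are overwritten on upgrade and should go in a local context instead:

```
[my_json_sourcetype]
TRUNCATE = 50000
```

TRUNCATE = 0 disables truncation entirely, at some memory risk for pathological events.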
How can I get the time difference between the two fields below? TIA
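In general terms (the actual field names were not included with the question, so the ones below are placeholders): convert both fields to epoch time with strptime(), subtract, and optionally format the difference as a duration:

```
| eval start_epoch=strptime(start_time, "%Y-%m-%d %H:%M:%S")
| eval end_epoch=strptime(end_time, "%Y-%m-%d %H:%M:%S")
| eval diff_sec=end_epoch - start_epoch
| eval diff=tostring(diff_sec, "duration")
```

The strptime format strings must match how the two fields are actually rendered in the events.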
Hey guys, I'm trying to build a report showing the top web users in my environment that are accessing websites under a certain category. My search thus far:

```
index="proxi" sourcetype="prxy" src="*"
| stats count by src category url
| where count > 1
| sort - count
```

This produces results one line at a time. What I'd like instead is a cumulative list of categories for each user (src) and all the URLs associated with those categories, so my table would look something like this:

```
src           category                     url
XX.XXX.XX.X   Advertisements               https://ib.adnxs.com
              Information Technology       https://btlr.sharethrough.com
              Web Collaboration            https://portal.engilitycorp.com
XX.XXX.XX.X   Search Engines and Portals   https://www.gstatic.com
              News and Media               https://smetrics.cnn.com
              Business and Economy         https://ssc.33across.com
```

I am not totally convinced that my method is the most efficient, so I'm open to suggestions.
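A hedged sketch of one way to get that layout: keep the per-URL counts, then roll categories and URLs up per src with list(), which preserves the row pairing between the two columns (values() would dedupe and re-sort them independently):

```
index="proxi" sourcetype="prxy" src="*"
| stats count by src category url
| where count > 1
| sort - count
| stats list(category) AS category list(url) AS url sum(count) AS total BY src
| sort - total
```

The final sort by total (an added field) puts the heaviest users first; drop it if the original ordering is preferred.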
Hi everyone, the scenario here is: email data being ingested by Splunk. We want to give a number of people access to this index, but confidential information sometimes appears in the subject line of an email, and the subject is part of the log. Is it possible to block just this field for certain roles but not all? Thank you.