Hi There!

I would like to find the values of host that were in macro 1 but not in macro 2.

search 1: `macro 1` | fields host
search 2: `macro 2` | fields host

macro 1 host: a b c d
macro 2 host: a b e f

Result: Count = 2, because hosts c and d were not in macro 2.

Thanks in advance!
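One way to sketch this (my own approach, not something from the post) is to tag each host with whether it ever appeared in `macro 2`, then keep only the hosts that never did:

```spl
`macro 1` | fields host | eval in_m2=0
| append [ search `macro 2` | fields host | eval in_m2=1 ]
| stats max(in_m2) as in_m2 by host
| where in_m2=0
| stats count
```

With the sample data above (a b c d vs. a b e f), only hosts c and d survive the `where`, so the final count is 2. Hosts that exist only in macro 2 (e, f) are dropped because their `in_m2` is always 1.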
Yes, I have already created the outputs.conf file and added the required info. It is placed under the etc/system/local/ folder.

[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 0
negotiateProtocolLevel = 0
sslCommonNameToCheck = *.<<stack>>.splunkcloud.com
sslVerifyServerCert = true
useClientSSLCompression = true

[tcpout-server://inputs1.<<stack>>.splunkcloud.com:9997]
[tcpout-server://inputs2.<<stack>>.splunkcloud.com:9997]
[tcpout-server://inputs14.align.splunkcloud.com:9997]

[tcpout:default-autolb-group]
disabled = false
server = 54.85.90.105:9997, inputs2.<<stack>>.splunkcloud.com:9997, inputs3.<<stack>>.splunkcloud.com:9997, ..... inputs15.<<stack>>.splunkcloud.com:9997

[tcpout-server://inputs15.<<stack>>.splunkcloud.com:9997]
sslCommonNameToCheck = *.<<stack>>.splunkcloud.com
sslVerifyServerCert = false
sslVerifyServerName = false
useClientSSLCompression = true
autoLBFrequency = 120

[tcpout:scs]
disabled = 1
server = stack.forwarders.scs.splunk.com:9997
compressed = true
Hello Splunkers!!

index=messagebus "AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName"="ASR/Hb/*/Entry*" OR "AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName"="ASR/Hb/*/Exit*"
| stats count by "AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName"
| fields - _raw
| fields AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName
| rex field=AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName "(?<location>Aisle\d+)"
| fields - AsrLocationStatusUpdate.AsrLocationStatus.LocationQualifiedName
| strcat "raw" "," location group_name
| stats count BY location group_name

[screenshot: current column chart produced by the search above]

I want to obtain the visualization below. Please guide me on what changes I need to make in my current SPL to obtain it.

[screenshot: desired visualization]
Hi,

Thank you very much for your help. Below is the final query and it is giving me the required output; however, I am not able to open the events in a separate tab.

index="app_cleo_db" origname="GEAC_Payroll*"
| rex "\sorigname=\"GEAC_Payroll\((?<digits>\d+)\)\d{8}_\d{6}\.xml\""
| search origname="*.xml"
| eval Date = strftime(_time, "%Y-%m-%d %H:00:00")
| eval DateOnly = strftime(_time, "%Y-%m-%d")
| transaction DateOnly, origname
| timechart span=1h count
| where count>0
| timewrap series=exact time_format="%d-%m-%Y" 1day
| eval _time=strftime(_time, "%H:%M:%S")
| sort _time
And here is the solution:

| eval row=mvrange(0,6)
| mvexpand row
| addinfo
| eval _time=case(row=0,info_min_time, row=1,strptime(StartTime,"%Y-%m-%d %H:%M:%S"), row=2,strptime(StartTime,"%Y-%m-%d %H:%M:%S"), row=3,strptime(EndTime,"%Y-%m-%d %H:%M:%S"), row=4,strptime(EndTime,"%Y-%m-%d %H:%M:%S"), row=5,info_max_time)
| eval value=case(row=0,0, row=1,0, row=2,1, row=3,1, row=4,0, row=5,0)
| table _time, value
I am using Splunk 9.0.4 and I need to make a query where I extract data from a main search. I am interested in results from the main search:

stage=it sourcetype=some_type NOT trid="<null>" reqest="POST /as/*/auth *"

I then need to filter results out of the main search using a subsearch that operates on a different data set, using the value of a field from the main search, let's call it trid. trid is a string that might be part of a field called message in the subsearch. There might be more results in the subsearch, but if there is at least one result in the subsearch, then the result from the main search stays; if not, it should not be included. So I am interested only in the results from the main search, and the subsearch is only used to filter out those that do not match.

stage=it sourcetype=some_type NOT trid="<null>" reqest="POST /as/*/auth *"
| fields trid
    [ search stage=it sourcetype=another_type
      | eval matches_found=if(match(message, "ID=PASSLOG_" + trid), 1, 0)
      | stats max(matches_found) as matches_found ]
| where matches_found>0

After a few hours I cannot figure out how to make it work. What is wrong with it? Please advise.
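The core issue is that a subsearch runs before the outer search and cannot see the per-event trid value, so `match(message, "ID=PASSLOG_" + trid)` never compares against the outer trid. One possible rework (a sketch only; it assumes the trid can be extracted from message with a rex, and reuses the field names from the post) is to extract the trids on the subsearch side and join on them:

```spl
stage=it sourcetype=some_type NOT trid="<null>" reqest="POST /as/*/auth *"
| join type=inner trid
    [ search stage=it sourcetype=another_type "ID=PASSLOG_"
      | rex field=message "ID=PASSLOG_(?<trid>\w+)"
      | dedup trid
      | fields trid ]
```

The inner join keeps only main-search events whose trid appears at least once in the other data set. Keep in mind that subsearches have result-count and runtime limits, so this sketch may need adjusting for large data sets.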
Yes, I want it to color the entire row if importer_in_csv = 0.
Given your search, you have a multi-value field - if you coloured this it would be the whole field, not just the importer that was missing. Is this what you really want?
Hi All, I have been trying to extract user IDs which have special characters in them, but with no luck. For example, let's say a field named uid contains two user IDs: one is "roboticts@gmail.com" and the other is "difficult+1@gmail.com". Now I want to write a query which extracts only the uid with a + sign in it. Please help with this.
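Since + is a regex metacharacter, it has to be escaped to match literally. A minimal sketch (the index name is a placeholder, not from the post) that keeps only events whose uid contains a literal plus sign:

```spl
index=your_index uid=*
| regex uid="\+"
```

`regex` filters events by matching the field's value against the pattern, and `\+` matches a literal plus rather than acting as a quantifier.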
Hi, I've built an add-on using Add-on Builder which gathers some data from the user, including an API key (the type of the field is password, so it replaces the API key with asterisks on the input creation page). During the creation of an input I can see that the API key is not encrypted and is passed to the new_input request as plain text in the payload body. It only happens if the API key is valid. Is there any way to remove or hide the API key there?
OK. The question is where are you getting this token from. Because apparently it's a formatted number which indeed might cause the error.
OK. So you have your UF pointed at the Cloud inputs, not at your HF. You should set your output to your HF.
Yes, I am trying to send the data to Splunk Cloud. The log file I am trying to receive from the UF.

[root@HFNode bin]# telnet inputs2.align.<<stack>>.com 9997
Trying 54.159.30.2...
Connected to inputs2.<<stack>>.splunkcloud.com.
Escape character is '^]'.
^C^C^CConnection closed by foreign host.

Connected successfully.
When I use my code, I see this error:

Error in 'where' command: The operator at ',127.542 - 0.001' is invalid.

The problem code is this:

| where time >= $max_value$ - 0.001

When I print "max_value" in the title, I can see that the value is "315,127.542". I think the reason this problem occurred is the ',' in max_value. How can I remove the ',' from max_value? And if that is not the problem, how can I solve this?
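One way to handle this (a sketch; it assumes the $max_value$ token arrives as a comma-formatted string, as the title suggests) is to strip the commas and convert to a number before comparing:

```spl
| eval max_clean = tonumber(replace("$max_value$", ",", ""))
| where time >= max_clean - 0.001
```

`replace` drops the thousands separators and `tonumber` turns the cleaned string into a numeric value, so the `where` comparison sees 315127.542 instead of the unparsable ",127.542". Alternatively, the token could be populated without number formatting upstream.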
Will do. Thanks for the response.
Yes, we have the connectivity.

splunk.exe cmd btool outputs list

UF node:
[tcpout-server://UFnode:9997]
[tcpout:default-autolb-group]
server = UFnode:9997

HF:

Not getting the logs in Splunk while using index="_internal" host=""
Did you upgrade to v9.1.2 already? If so, I suggest you create a ticket with support. The replacing was a temporary work-around for us from v9.1.0.2. On our test server I was curious whether this work-around would still work on v9.1.1 - it did not! So I decided to wait for the release of v9.1.2. After the release I first tested on our test server and no longer had any problem with sending email. After some days we upgraded to v9.1.2 on our production machine. NB: both our servers are running Windows 2019, and both are now on Splunk Enterprise v9.1.2 without problems so far.
Hi @nagesh,
it seems that there's a block in connections between the UF and the HF.

At first:
did you enable receiving on the HF?
did you enable forwarding to the HF on the UF?

Then, check the connection using telnet on the port you're using (default 9997).

If it's all OK, you should have, in your Splunk (not on the HF), the Splunk internal logs from that UF:

index=_internal host=<your_UF_hostname>

Ciao.
Giuseppe
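To make those two checks concrete, here is a minimal sketch of the stanzas involved (file paths, port, and group name are assumptions for illustration, not taken from the poster's environment):

```
# On the HF (receiver), e.g. $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0

# On the UF (sender), e.g. $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = <HF_hostname>:9997
```

Both instances need a restart after the change, and the port (9997 here) must be reachable through any firewall between them.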
OK.

1. What is your setup? You seem to be trying to send the data to Cloud, right?

2. This is a log from where? UF or HF? Because it's trying to send to Cloud directly. So if it's the UF's log, your output is not properly configured. If it's the HF's log, then you don't have your network port open on the firewall.

3. What's the whole point of pushing the data from the UF via the HF? Remember that the UF sends data cooked but the HF sends the data parsed, which means roughly 6x the bandwidth (and you don't get to parse the data on the indexers, so some parts of your configuration might not work the way you expect).