Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Will do. Thanks for the response.
Yes, we have connectivity. splunk.exe cmd btool outputs list on the UF node shows:

[tcpout-server://UFnode:9997]
[tcpout:default-autolb-group]
server = UFnode:9997

HF: we are not getting the logs in Splunk when using index="_internal" host=""
Did you upgrade to v9.1.2 already? If so, I suggest you create a ticket with Support. Replacing the file was a temporary work-around for us on v9.1.0.2. On our test server I was curious whether this work-around would still work on v9.1.1 - it did not! So I decided to wait for the release of v9.1.2. After the release I first tested on our test server and no longer had any problems sending email. A few days later we upgraded our production machine to v9.1.2. NB: both our servers run Windows 2019, and both are now on Splunk Enterprise v9.1.2 without problems so far.
Hi @nagesh, it seems that there's a block in the connection between the UF and the HF. First: did you enable receiving on the HF? Did you enable forwarding to the HF on the UF? Then check the connection using telnet on the port you're using (default 9997). If all is OK, you should see, in your Splunk (not on the HF), the Splunk internal logs from that UF:

index=_internal host=<your_UF_hostname>

Ciao. Giuseppe
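For reference, the two settings mentioned above usually boil down to two small .conf stanzas; a minimal sketch (the hostname hf.example.com and the group name are placeholders, not taken from the thread):

```ini
# inputs.conf on the HF - enable receiving on TCP port 9997
[splunktcp://9997]
disabled = 0

# outputs.conf on the UF - forward everything to the HF
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = hf.example.com:9997
```

Receiving can also be enabled from Splunk Web on the HF under Settings > Forwarding and receiving.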
OK.
1. What is your setup? You seem to be trying to send the data to Cloud, right?
2. This is a log from where, UF or HF? Because it's trying to send to the cloud directly. So if it's the UF's log, your output is not properly configured. If it's the HF's log, then your network port is not open on the firewall.
3. What's the whole point of pushing the data from the UF via a HF? Remember that a UF sends data cooked, but a HF sends data parsed, which means roughly 6x the bandwidth (and you don't get to parse the data on the indexers, so some parts of your configuration might not work the way you expect).
You should be able to create a calculated field called diff with the following value:

tostring(strptime(receive_time, "%Y/%m/%d %H:%M:%S") - strptime(start_time, "%Y/%m/%d %H:%M:%S"), "duration")
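A calculated field like this is typically registered in props.conf with an EVAL- setting; a minimal sketch, assuming a hypothetical sourcetype my_sourcetype (not named in the thread):

```ini
# props.conf - EVAL- defines a search-time calculated field named "diff"
[my_sourcetype]
EVAL-diff = tostring(strptime(receive_time, "%Y/%m/%d %H:%M:%S") - strptime(start_time, "%Y/%m/%d %H:%M:%S"), "duration")
```

The same expression can also be entered through Settings > Fields > Calculated fields in Splunk Web.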
Hi @judywatson, Splunk declares its compatibility at the kernel level, not at the operating-system level: Splunk 9.1.2 is certified on Linux kernel >= 3. In my experience, I have never found issues using Splunk on Red Hat. You can find more details at https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements Pay attention to the considerations and configurations that you have to apply on the operating system (ulimit and THP). Ciao. Giuseppe
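A quick way to inspect the two OS settings mentioned above (the THP path below is the usual one on Red Hat-family systems; it may differ or be absent on other distributions):

```shell
# Show the open-file-descriptor limit for the current shell (ulimit)
ulimit -n

# Show whether Transparent Huge Pages are enabled; Splunk docs recommend disabling THP
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || echo "THP interface not present"
```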
Hi, I'm trying to create a query which will display events matching the following conditions: 5 or more different destination IPs, one IDS attack name, all within 1 hour. I tried the following: index=ids | streamstats count time_window=1h by dest_ip attack_name | where count (attack_name=1 AND dest_ip>=5) but Splunk does not accept it, so I presume it has to be written differently. Could somebody help me please?
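One way the condition above is often expressed (a sketch, not a verified drop-in answer; the field names dest_ip and attack_name are taken from the question) is to bucket events into 1-hour windows and count distinct destinations per attack name:

```
index=ids
| bin _time span=1h
| stats dc(dest_ip) AS dest_count BY _time attack_name
| where dest_count >= 5
```

stats dc() counts distinct values, which matches the "5 or more different destination IPs" requirement better than streamstats count does.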
Hi @Mr_Adate, sorry, I forgot a field; please try this:

| inputlookup ABC.csv
| eval lookup="ABC.csv"
| fields Firewall_Name lookup
| append [
    | inputlookup XYZ.csv
    | eval lookup="XYZ.csv"
    | rename Firewall_Hostname AS Firewall_Name
    | fields Firewall_Name lookup ]
| chart count OVER lookup BY Firewall_Name

Ciao. Giuseppe
I had the same issue with sending emails. I was able to resolve it by replacing the sendemail.py file in $SPLUNK_HOME\etc\apps\search\bin with an older version of sendemail.py. I still have the issue with endless loading on settings pages. How were you able to resolve this?
@ITWhisperer Thank you for the quick revert. It worked!
Hi gcusello, thank you very much for your prompt reply, I appreciate it. I tried your code, but I guess something is wrong with the last line: I am getting 0 results. Can you please check it again?
In SimpleXML, certain characters must be entered as HTML entities (specifically double quotes, greater than, less than, and so on). More generally, GET URLs are best encoded without special characters. So, replace

| eval last_found = strftime(last_found, "%c")

with

%3D%20strftime(last_found%2C%20%22%25c%22)

Meanwhile, I do not know how the cited URL could "work fine till" now. If you are entering these in the source editor, you can try replacing double quotes with &quot;, i.e.,

| eval last_found = strftime(last_found, &quot;%c&quot;)

I recommend using the visual editor, however. There, you can enter SPL as SPL.
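If it helps, the percent-encoding shown above can be produced programmatically rather than by hand; a small Python sketch (the safe="()" argument simply mirrors the encoding above, which leaves parentheses unescaped):

```python
from urllib.parse import quote

# Percent-encode the SPL fragment for use in a GET URL.
# safe="()" keeps parentheses literal, matching the encoded form above.
spl = '= strftime(last_found, "%c")'
print(quote(spl, safe="()"))  # -> %3D%20strftime(last_found%2C%20%22%25c%22)
```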
I am trying to send data from a client machine with a UF installed to a Heavy Forwarder installed on another machine, but I am getting the below error:

12-06-2023 10:01:22.626 +0100 INFO  ClientSessionsManager [3779231 TcpChannelThread] - Adding client: ip=10.112.73.20 uts=windows-x64 id=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE name=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE
12-06-2023 10:01:22.626 +0100 INFO  ClientSessionsManager [3779231 TcpChannelThread] - ip=10.112.73.20 name=86E862DA-2CDC-4B21-9E37-45DFF4C5EFBE New record for sc=100_IngestAction_AutoGenerated app=splunk_ingest_actions: action=Phonehome result=Ok checksum=0
12-06-2023 10:01:24.551 +0100 INFO  AutoLoadBalancedConnectionStrategy [3778953 TcpOutEloop] - Removing quarantine from idx=3.234.1.140:9997 connid=0
12-06-2023 10:01:24.551 +0100 INFO  AutoLoadBalancedConnectionStrategy [3778953 TcpOutEloop] - Removing quarantine from idx=54.85.90.105:9997 connid=0
12-06-2023 10:01:24.784 +0100 ERROR TcpOutputFd [3778953 TcpOutEloop] - Read error. Connection reset by peer
12-06-2023 10:01:25.028 +0100 ERROR TcpOutputFd [3778953 TcpOutEloop] - Read error. Connection reset by peer
12-06-2023 10:01:28.082 +0100 WARN  TcpOutputProc [3779070 indexerPipe_1] - The TCP output processor has paused the data flow. Forwarding to host_dest=inputs10.align.splunkcloud.com inside output group default-autolb-group from host_src=prdpl2splunk02.aligntech.com has been blocked for blocked_seconds=60. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
This is my search:

|search index=****** interfaceName=*
|stats values(interfaceName) as importer
|join type=left [|search index=****** Code=* [|inputlookup importers.csv |table interfaceName]
    |lookup importers.csv interfaceName OUTPUTNEW system timeRange
    |where like(system, "%")
    |stats values(system) as reality values(timeRange) as max_time]
|eval importer_in_csv=if(isnull(max_time),0,1)

I want to color the importer column if importer_in_csv = 0. How do I do it in XML? Thanks!!
This is what I want to achieve (3.jpg): the time range from the time range picker, in this case the day 05.12.2023.
Yes, if I search for any field and value, the events are filtered based on my search, but the fields are not extracted.
That's... weird. If you search, for example, for UserName=*, you get events, but those events don't show the UserName field?
You can do

| timechart span=1h count
| where count>0
| timewrap 1day

to filter out "empty" results.
Hi, I would like to know if it is possible to filter all the graphs in my dashboard by clicking on a portion of another graph in the same dashboard, in order to achieve cross-filter behavior. I am using Dashboard Studio. Thank you. Best regards