All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I don't believe it will work for Splunk Cloud trials. From the docs: https://docs.splunk.com/observability/en/logs/intro-logconnect.html "Region and version availability: Splunk Log Observer Connect is available in the AWS regions us0, us1, eu0, jp0, and au0, and in the GCP region us2. Splunk Log Observer Connect is compatible with Splunk Enterprise versions 9.0.1 and higher, and Splunk Cloud Platform versions 9.0.2209 and higher. Log Observer Connect is not available for Splunk Cloud Platform trials."
Okay, let me back up. One sourcetype contains the correlation logs, with src_ip as its primary identifier. The other sourcetype is our threat logs, where we see far more data about destination, url, app, etc. I want to create a search that takes the IPs from the correlation logs, looks for the same src_ip in the threat logs within a range of 1-2 hours, and returns a detailed table describing what could have caused the correlation event to be created. Is this possible to do without using an outputlookup? Also, this index has a data model I could leverage, where the nodenames are log.threat and log.correlation.
Here is a picture of my results. Hoping to get some help with having the second column populate with the urlrulelabel, apprulelabel, and rulelabel policies rather than just one.
It’s not giving the expected result. This is a lot better than a phrase we hear too often: "It doesn't work." This said, what is the expected result? To ask an answerable data analytics question, follow these golden rules; nay, call them the four commandments:
1. Illustrate data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
Lol, almost there, but a million miles away. I attempted something similar but didn't fare well. Thanks a million. Still working through a few new modules, but learning more each day.
This is a little confusing. You are almost there:
index=web sourcetype=access_combined status=200 productId=*
| timechart sum(price) as DailySales count as UnitsSold
Is there something else we need to know?
Yes, such use cases are quite common and simple, and it is not always appropriate to use a lookup table. In fact, correlation search is the most fundamental strength of Splunk. Meanwhile, you do want to consider whether it is appropriate to compare the two sourcetypes in the same search time period.

This said, your final table is not very illustrative for the statement "make a table using fields from sourcetype B that do not exist in sourcetype A" because IP is nowhere in that table. Mind-reading 1: I will insert src_ip into the table. More critically, you did not illustrate what you mean exactly by "compare (IPs from sourcetype A) against a larger set of IPs". In the end result, do you want to list IPs in sourcetype B that do not exist in sourcetype A? Mind-reading 2: I will assume no on this.

index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A)
| stats values(field_A) as field_A values(field_B) as field_B values(field_C) as field_C values(sourcetype) as sourcetype by src_ip
| where sourcetype == "sourcetype_A"
| fields - sourcetype

Here, the filter uses a side effect of Splunk's equality comparator on multivalue fields. (There are more semantically expressive alternatives, but most people just use this shortcut.)
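For readers who prefer the more explicit form hinted at above, one sketch (reusing the placeholder field and sourcetype names from this thread, not a real environment) replaces the multivalue equality shortcut with mvfind(), which returns NULL when no value of the multivalue field matches the regex:

```spl
index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A)
| stats values(field_A) as field_A values(field_B) as field_B values(field_C) as field_C values(sourcetype) as sourcetype by src_ip
| where isnotnull(mvfind(sourcetype, "^sourcetype_A$"))
| fields - sourcetype
```

This keeps only src_ip values that were seen in sourcetype_A, stating the intent directly instead of relying on the comparator's multivalue behavior.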
Stuck again and not sure what I'm missing... I have the first two steps but cannot figure out the syntax to use timechart to count all events under a specific label. Any help is greatly appreciated. The task: use timechart to calculate the sum of price as "DailySales" and count all events as "UnitsSold". What I have so far:
index=web sourcetype=access_combined status=200 productId=*
| timechart sum(price) as DailySales
How do I show the averages and their statuses in the connections of the Flow Map viz?
index=gc source="log" QUE_NAM="S*"
| stats sum(eval(FINAL="MQ SUCCESS")) as good sum(eval(FINAL="CONN FAILED")) as errors sum(eval(FINAL="MEND FAIL")) as warn avg(QUE_DEP) as queueAvgDept by QUE_NAM
| eval to=QUE_NAM, from="internal"
| append [search index=es sourcetype=queue_monitor queue_name IN ("*Q","*R")
| bucket _time span=10m
| stats max(current_depth) as max_Depth avg(current_depth) as avg_Depth by _time queue_name queue_manager
| eval to=queue_name, from="external"]
With this query I got the visualization below, and I need to connect the internal and external nodes (highlighted in red) and show the average count through the flow between them. Please help me out on this. Thanks in advance!
I believe it is due to my lack of understanding of how the indexers in an indexer cluster treat locally monitored data versus data forwarded to the indexer cluster. I mistakenly thought that locally monitored logs on each indexer don't get treated the same way as logs that were forwarded to the indexer cluster. Thank you for pointing out the infinite loop; I guess this was the issue when I tried to configure the indexer to forward locally monitored data to its own indexer cluster, which made them spew out a lot of errors. In that case, it seems that I should just create an `inputs.conf` on the indexers and monitor whatever I want, as the indexers' logs would get indexed and subsequently replicated, if I'm understanding it correctly. Thank you for your help!
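For anyone following along, a minimal sketch of such an inputs.conf stanza on an indexer might look like this (the path, index, and sourcetype here are made-up examples; in a cluster you would typically distribute the file from the cluster manager rather than editing each peer by hand):

```conf
# Example monitor input on an indexer (hypothetical path and names)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = 0
```

Because the indexer itself indexes this data, no outputs.conf forwarding is involved, and the resulting buckets are replicated like any other clustered data.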
What I tend to do to get all the results in email or Slack is to use stats as described here https://community.splunk.com/t5/Reporting/Using-result-fieldname-in-email-text-body-splunk-email-alert/m-p/399711  
While most instance types should forward their logs to the indexers (using outputs.conf), indexers must not do so lest they cause an infinite loop. By virtue of the fact that the indexer is part of the cluster, its logs go through the cluster. What problem are you trying to solve?
Run btool to confirm, but it looks like you have a '[default]' stanza inadvertently assigning the incorrect sourcetype. I'd check for the following in /opt/splunk/etc/apps/splunk_ta_onelogin/local/inputs.conf:
[default]
sourcetype = onelogin:user
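The btool check might look something like this (run on the host where the add-on is installed; --debug prints the file each setting comes from, which is how you spot a stray local override):

```shell
# List the merged [default] inputs settings and their source files
$SPLUNK_HOME/bin/splunk btool inputs list default --debug
```

If a sourcetype line appears there attributed to splunk_ta_onelogin/local/inputs.conf, removing that stanza and restarting Splunk should stop it from overriding the per-input sourcetypes.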
I just set up a free Duo account and installed and configured the add-on without issues. The only other things I can suggest are:
1. Verify your Splunk instance's public egress address is in the Admin API application's "Networks for API access" list.
2. Verify any intervening host or network firewalls or transparent proxies allow connectivity to your API hostname.
3. Verify your Splunk host can connect to your API hostname using openssl:
$SPLUNK_HOME/bin/splunk cmd openssl s_client -connect api-xxx.duosecurity.com:443
The Duo Admin API Python client used by the add-on supports HTTP proxies, but Duo didn't include proxy support in the modular input. If you need this feature, you'll need to request it from Duo.
I'm trying to create a search where I take a small list of IPs from sourcetype A and compare them against a larger set of IPs in sourcetype B. I will then make a table using fields from sourcetype B that do not exist in sourcetype A, to create a more detailed look at the events involving the IP. Is there a way to do this without using a lookup table?
index=paloalto (sourcetype=sourcetype_B OR sourcetype=sourcetype_A)
| eval small_tmp=case(log_type="CORRELATION", src_ip)
| eval large_tmp=case(log_type!="CORRELATION", src_ip)
| where match(small_tmp, large_tmp)
| table field A, field B, field C
Hi @ririzk, _ssl.c is part of Python, not Splunk. A quick look at a non-specific version of the _ssl.c source code shows that this error is returned when a connection is closed unexpectedly. You should contact Duo support for more detail.
Hi @newbie77, If an instance of Field1=Start is always the earliest event by uid and Field2=Finish is always the latest event by uid, you can use the stats range() function:
| stats range(_time) as duration by uid
Otherwise, use the stats min() and max() or earliest() and latest() functions with an eval expression:
| stats min(eval(case(Field1=="Start", _time))) as start_time max(eval(case(Field2=="Finish", _time))) as finish_time by uid
| eval duration=finish_time-start_time
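A sketch of the earliest()/latest() variant mentioned above, assuming the same Field1/Field2/uid names from the question (these functions also accept eval expressions, and pick values by event time rather than by numeric order):

```spl
| stats earliest(eval(case(Field1=="Start", _time))) as start_time latest(eval(case(Field2=="Finish", _time))) as finish_time by uid
| eval duration=finish_time-start_time
```

This behaves the same as the min()/max() form when _time itself is the value being aggregated, but reads closer to the intent of "first Start, last Finish."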
Hi, Thanks for the feedback. We can have a lot of rows; I will have a look at the other app. Cheers, Rob
This is very old, but did anyone ever figure this out? We've had a ticket open for a month now about this exact issue and have chased down every error message and possible conf change. If anyone out there has a possible solution or suggestion for this, that would be awesome! Thanks, everyone!
I have three searches ORed together, for example: "order success" "order failed" "offer success". Based on those 3 statements I can perform the search, but I want to show the result as a pie chart on a per-hour basis.
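One possible sketch, assuming the three phrases appear literally in the raw events (the index and sourcetype names are placeholders): classify each event with eval and searchmatch(), then count by the label. A pie chart shows one slice per label for the whole time range, so pick a one-hour range in the time picker, or swap stats for timechart to see the hourly trend instead:

```spl
index=my_index sourcetype=my_sourcetype ("order success" OR "order failed" OR "offer success")
| eval status=case(searchmatch("order success"), "order success",
                   searchmatch("order failed"), "order failed",
                   searchmatch("offer success"), "offer success")
| stats count by status
```

For an hourly breakdown, replace the last line with | timechart span=1h count by status and use a column chart; pie charts cannot show a time axis.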