All Posts

Unfortunately Dashboard Studio does not have an option for this. Are you able to make the visualization wider to make room for the numbers? Or you could convert the numbers into different units to reduce the number of digits.
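For example (a hedged sketch - the bytes field and the gigabyte target are assumptions, not from the original question), the conversion can be done in the search that feeds the visualization:

... your search ...
| eval size_gb = round(bytes / 1024 / 1024 / 1024, 2)

Charting size_gb instead of the raw byte count keeps the axis labels short.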
How did you install TA-pfsense on the Search heads? 
No. In a SimpleXML dashboard you can include custom JS which could potentially make your browser play a media file (which might not necessarily be the best idea). Dashboard Studio doesn't let you do this level of customization.
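As a rough illustration of that SimpleXML escape hatch (the app name, file name and trigger are placeholders - in practice you would tie the sound to a search result rather than play it on load, and browsers may block autoplay):

In the dashboard XML:
<dashboard script="play_sound.js">
  ...
</dashboard>

In play_sound.js, placed under the app's appserver/static directory:
require(['splunkjs/mvc/simplexml/ready!'], function() {
    // Hypothetical: play a bundled audio file once the dashboard has loaded
    var audio = new Audio('/static/app/my_app/alert.mp3');
    audio.play();
});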
It looks like service downtime. Especially considering a sudden spike in throughput after a drop - the forwarders were pushing the queued data. Check your splunkd.log immediately before and after that outage.
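If the outage window is known, a quick way to look for it from the search head is something like this (a hedged sketch - the 5-minute span and the WARN/ERROR filter are just starting points; splunkd.log on the forwarder itself lives under $SPLUNK_HOME/var/log/splunk/):

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
| timechart span=5m limit=10 count by component

Run it over a range that brackets the drop and the spike; a burst of errors from a single component right before the gap usually points at the culprit.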
It's also worth explaining why the where command is usually way slower than adding another condition to the original search (or adding another search command in the pipeline).

Firstly, Splunk is relatively smart and when it sees

search condition1 | search condition2

it internally optimizes it and treats it as

search condition1 AND condition2

But that's a minor point here. The major point (and that's really very important in understanding why some things work faster with Splunk than others) is _how_ Splunk searches the indexes for data.

Your typical "other solution" (like an RDBMS or some object database which indexes documents) splits the data into discrete fields on ingestion and holds each of those fields in a separate "compartment" (we can call it columns in a database table, we can call it object properties, it doesn't matter here). So when it has to look for a key=value pair, the solution looks into the "drawer" called "key" and looks for "value".

Splunk (mostly; the exception being indexed fields) works the other way around. It stores the "values" in the form of tokens into which it splits the input data. During searching, if you search for a key=value condition, it searches for all events containing the "value" token and parses all of them to see if the value is in the proper place within the event to match the defined extraction for key.

Of course the more values you're looking for (because you have separate conditions for many fields containing separate values, like key1=value1 AND key2=value2 AND key3=value3 and so on), the lower the count of events containing all those values at the same time, and the fewer events Splunk has to actually parse to see if those field definitions match what you're searching for. So if you're adding more conditions to your search with AND, you're telling Splunk to consider fewer and fewer events.

But where does not work like that. Where works only as a streaming command and has to process all the events that come from the preceding command(s).

So for example, say your index has 100 thousand events, of which 10000 contain "value1" and 10000 contain "value2" (1000 of them overlap and contain both values). If you're searching for

index=myindex key1=value1 key2=value2

Splunk only has to parse the 1000 events which contain both values at the same time to find whether they contain them in the places corresponding to key1 and key2 respectively. But if you do

index=myindex key1=value1 | where key2="value2"

Splunk has to parse all 10000 events containing value1 to see if they match key1, and from the resulting set it then needs to match all events where key2="value2". Even worse, if you just did

index=myindex | where key1="value1" AND key2="value2"

Splunk would have to read all 100k events from your index and parse those two fields out of them to later compare their values with the given condition.

To show you what difference that can make, an example from my home lab box:

index=winevents EventCode=4799 EventRecordID=461117

I ran this search over the last 30 days.

This search has completed and has returned 1 results by scanning 1 events in 0.278 seconds

EventRecordID is a pretty unique identifier, so Splunk already had only a single record to check.
If we move this condition to the where part:

index=winevents EventCode=4799 | where EventRecordID=461117

We get:

This search has completed and has returned 1 results by scanning 9,768 events in 1.045 seconds

As you can see, Splunk had to do much more work, because I had 9768 events which matched the value 4799 (and from the further job inspection, which I'm not pasting here, I see that all of them were in the EventCode field) and all those events had to be processed further by the where command. It's still relatively fast, because 10k events is not that much, but it's about 4 times slower (the difference on bigger sets would be more noticeable - here a big part of the time used is just spawning the search).

If we move both conditions to the where part:

index=winevents | where EventCode=4799 AND EventRecordID=461117

We still get the same 1 result, which is not surprising, but...

This search has completed and has returned 1 results by scanning 63,740 events in 6.017 seconds

I have exactly 63740 events in the winevents index and they all had to be parsed and processed further down the pipeline by the where command. And it's no wonder that, since there are about 6 times more events to process than in the previous variant, it took about 6 times as much time.

So yes, where is a fairly sophisticated and flexible command letting you do many things that the ordinary search command won't, but the tighter you can "squeeze" your indexes with the initial search, the better the overall performance.
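To see the optimizer point from the beginning of this post for yourself, run the piped form and open the Job Inspector (this is a sketch - myindex, key1 and key2 are placeholder names, not fields from the example above):

index=myindex key1=value1
| search key2=value2

Under Search job properties, the optimizedSearch value should show the second search command folded back into the initial one, i.e. effectively index=myindex key1=value1 key2=value2 - which is exactly why the piped search form does not carry the penalty that where does.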
The exact search to produce a visualization would depend on which fields are extracted for your logs. Assuming they are normalized such that e.g. the field "user" and the field "status" are the same between the Windows and RHEL logs, then you could find the 5 users with the most failed logins for the past week with:

(index=<yourwindowslogindex> OR index=<yourlinuxlogindex>) earliest=-7d status="failed" | top limit=5 user

If the fields are not normalized, then you may need to extract them. In this case could you post some sanitized samples of the successful and failed login events? They should be retrievable by searching something like:

index=<yourindex> (EventCode=4624 OR EventCode=4625 OR "Login")
YES! That's what I'm looking for. I have both Windows and RHEL machines. I'm using the Cisco network app to track logins to the network on there, if that makes sense. I'd like to have it show logins over the course of 7 days with the top 5 users, like you were saying. That just makes sense. I'm learning a bunch of stuff.
I wouldn't recommend using ChatGPT to make Splunk searches. It usually generates nonsense and even if the SPL is valid, it tries to do bizarre stuff. It would help if you would specify what kind of visualization of logins you would like. Do you want a total of successful and failed logins over a time period? Do you want to find the top 5 users with failed logins? Would you like to see a timeline of successful and failed logins over the past 7 days?
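For example, a timeline of successful vs. failed logins over the past 7 days could look roughly like this (the index name, the action field and its values are assumptions - substitute whatever your events actually contain):

index=<yourloginindex> earliest=-7d (action=success OR action=failure)
| timechart span=1d count by action

Rendered as a column or line chart, that gives one series per outcome per day.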
Hi Splunk team. I wonder which version of Cyber Vision is supported by the API release v2.0 for Splunk Enterprise.
Please forgive me, I am new to Splunk. I'm trying to create a dashboard that visualizes successful/failed logins. I don't have anyone I work with that's a professional or even knowledgeable/experienced enough to help. So, I started to use ChatGPT to help develop these strings. After I got the base setup from ChatGPT, I tried to fill in the sourcetypes. But now I'm getting this error: Error in 'EvalCommand': The expression is malformed. Please let me know what I need to do to fix this. Ask away please. It'll only help me get better.

index=ActiveDirectory OR index=WindowsLogs OR index=WinEventLog
(
  (sourcetype=WinEventLog (EventCode=4624 OR EventCode=4625)) # Windows logon events
  OR (sourcetype=ActiveDirectory "Logon" OR "Failed logon") # Active Directory logon events (adjust keywords if needed)
)
| eval LogonType=case(
    EventCode=4624, "Successful Windows Login",
    EventCode=4625, "Failed Windows Login",
    searchmatch("Logon"), "Successful AD Login",
    searchmatch("Failed logon"), "Failed AD Login"
  )
| eval user=coalesce(Account_Name, user) # Combine Account_Name and user fields
| eval src_ip=coalesce(src_ip, host) # Unify source IP or host
| stats count by LogonType, user, src_ip
| sort - count
Hi @Tiong.Koh, Have you had a chance to review the latest reply? If it answers your question, please click the "Accept as Solution" button, if not, reply back to keep the conversation going. 
To investigate the issue of missing data in Splunk for a period of 3-4 hours: gaps were observed in the _internal index as well as in performance metrics like network and CPU data, but I still can't find the potential root cause of the missing data. Please help me with what I need to investigate further to find the potential root cause of the data gap in Splunk.

Gap in the _internal index data
Gap visible in the network performance data
Gap in the CPU performance data
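A couple of hedged starting points (the spans and field names below are generic, adjust to your environment): first check whether data stopped arriving at the indexers at all, then whether the forwarders stayed connected during the gap.

| tstats count where index=_internal by _time span=10m

index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=10m dc(hostname) AS connected_forwarders

If connected_forwarders drops to zero across the gap, look at the network path and at splunkd.log on the forwarders; if the connections stayed up, look at indexer-side queues and splunkd restarts instead.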
@ITWhisperer OK, how can we create such a line chart with the X axis as Time (not _time) and the Y axis as count1, count2, count3?
Thank you @gcusello I'll get with support!
I want to make a sound alert in my dashboard studio dashboard. Is it even possible?
No, the y-axis represents numeric values, which in your example would be the values from count1, count2 and count3.
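As a rough sketch of that (how count1, count2 and count3 are produced is left out here; only the final table shape matters): make the Time field the first column and the numeric counts the remaining columns, then choose the line chart visualization.

... your search ...
| table Time count1 count2 count3

The first column (Time, even though it is a plain string field rather than _time) becomes the x-axis, and each remaining numeric column becomes its own line.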
Hi @jaburke1 , try it, but, as I said, I usually avoid using automatic lookups. Ciao. Giuseppe
Hi @FPERVIL , I usually deploy on all the Forwarders an app, usually called TA_Forwarders, containing at least three files:

app.conf
deploymentclient.conf
outputs.conf

In this way I can centrally manage both sending data to the Indexers and the connection to the Deployment Server. Ciao. Giuseppe
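A minimal sketch of what such an app could contain (the deployment server name, indexer names, ports and output group below are placeholders, not values from this thread):

TA_Forwarders/default/deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = ds.example.local:8089

TA_Forwarders/default/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.local:9997, idx2.example.local:9997

TA_Forwarders/default/app.conf
[install]
state = enabled

Pushing one such app to every forwarder means a change of indexers or deployment server only has to be edited in a single place.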