I have a dashboard with multiple line charts showing values over time. I want all charts to have the same fixed time (X) axis range, so I can compare the graphs visually. Something like the fixedrange option in the timechart command. However, I use a simple "| table _time, y1, y2, yN" instead of timechart, because I want the real timestamps in the graph, not some approximation due to timechart's notorious binning. To mimic the fixedrange behavior, I append a hidden graph with just two coordinate points (t_min|0) and (t_max|0):

... | table _time, y1, y2, y3, ..., yN
| append [
    | makeresults
    | addinfo
    | eval x=mvappend(info_min_time, info_max_time)
    | mvexpand x
    | rename x as _time
    | eval _t=0
    | table _time, _t ]

This appended search appears very cheap to me - it alone runs in less than 0.5 seconds. But now I realized that it makes the overall search dramatically slower, about 10x in time. The number of scanned events explodes. This even happens when I reduce it to:

| append maxout=1 [ | makeresults count=1 ]

What's going on here? I would have expected the main search to run exactly as fast as before, and the only toll should be the time required to add one more line with a timestamp to the end of the finalized table, no?
Dashboard Studio has a 1,000-item limit because going over that is really hard on the browser. Classic dashboards don't have the limit, but if you create a dropdown with 10,000 items it takes many seconds to show the list, so 25,000 would be rather useless. It's a browser issue more than a Splunk issue. You can't change this with any limits.conf setting, and the parameter you mention is not even an option. So, as @ITWhisperer says, it's better to structure your dashboard so you have some initial filter, e.g. another dropdown or a text input, that is used to limit the size of the dropdown.
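For illustration, a minimal sketch of a dropdown populating search narrowed by a text input token - the index, field, and token names here (my_index, item_name, $name_filter$) are assumptions, not taken from the actual dashboard:

index=my_index
| stats count by item_name
| search item_name="*$name_filter$*"
| fields item_name
| head 1000

The head 1000 keeps the result set under the Dashboard Studio limit, and the text input can default to an empty value so the dropdown still populates before anything is typed.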
Sorry for the vagueness; the imprecise wording is intentional due to the nature of the environment I work in. The network devices' logs get sent to a syslog server. The syslog server writes all the logs to files in a specific path. On our Server Class server, the Data Input settings are configured to read all the files from that path (it's a unique enough path) and send them to our "network_devices" index. So the data is being sent to the correct index, but a good portion of the logs end up in sourcetype=syslog rather than the TA's sourcetype. That is where I am stuck.
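For reference, a minimal inputs.conf sketch of the kind of monitor input described above - the path and the sourcetype value are placeholders, not the actual ones from this environment:

[monitor:///var/log/network_devices/*.log]
index = network_devices
sourcetype = <the_TA_sourcetype>
disabled = 0

Whether the sourcetype is set explicitly on the input like this, or left for the TA's props/transforms to rewrite at parse time, is usually what decides whether events end up as syslog or as the TA's sourcetype.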
That is indeed strange, because your setup - according to the specs for both files (inputs.conf and outputs.conf) - should indeed work as expected. I suppose you checked with btool what the effective config is for your inputs and outputs (especially that nothing overwrites your _TCP_ROUTING)? One thing I'd try would be to add the _TCP_ROUTING entries to the [default] stanza and to [WinEventLog] (if applicable; I suppose in your case it's not).
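A quick sketch of both suggestions - the btool calls to review the effective config, plus a hypothetical [default] stanza in inputs.conf carrying the routing (group names taken from the outputs.conf posted in the question):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i _TCP_ROUTING
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug

inputs.conf:

[default]
_TCP_ROUTING = Indexer1_group, newHFs_group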
Hi Experts, it used to work fine (I uploaded the last version last year), but today I am trying to upload a new version of our Splunk app, https://splunkbase.splunk.com/app/4241, and it fails. I tried multiple times, but it failed every time. There is no error in the UI, but I can see a 403 in the browser inspector: POST https://classic.splunkbase.splunk.com/api/v0.1/app/4241/new_release/ 403 (Forbidden). Could you please let me know what is going on here?
Hi! Now that global logon has been implemented, how can you obtain the value of the required cookie? I can only get it manually, but I need to obtain it programmatically; however, I cannot do that by requesting login with basic auth anymore.
It only works if you provide the header 'Cookie': 'JSESSIONID=<the_actual_value>;'. The actual value can be taken from the dev tools once you're logged in. Just look in the network tab and read any request that is sent (the request headers).
Hi Splunkers, today I have a strange situation that requires some thorough data sharing on my side, so please forgive me if I'm going to be long. We are managing a Splunk Enterprise infrastructure previously managed by another company. We are in charge of AS IS management and, at the same time, performing the migration to a new environment.

The new Splunk env setup is done, so now we need to migrate the data flow. Following Splunk best practice, we need to temporarily perform a double data flow:

- Data must still go from the log sources to the old env
- Data must also flow from the log sources to the new env

We have already handled a double data flow for another customer, managed using the Route and filter data doc and support here on the community. So the point is not that we don't know how it works. The issue is that something is not going as expected.

So, how is the current env configured? Below are the key elements:

- A set of HFs deployed in the customer data center.
- A cloud HF in charge of collecting data from the above HFs and other data inputs, like network ones.
- 2 different indexers: they are not in a cluster, they are separate and isolated indexers. The first one collects a subset of the data forwarded by the cloud HF, the second one the remaining data.

So, how is the cloud HF configured for tcp data routing? In $SPLUNK_HOME/etc/system/local/inputs.conf, two stanzas are configured to receive data on ports 9997 and 9998; the configuration is more or less:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group

Then, in $SPLUNK_HOME/etc/system/local/outputs.conf we have:

[tcpout]
defaultGroup=Indexer1_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

So, the current behavior is:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1
- Logs collected on port 9998 of the cloud HF are sent to Indexer2
- Everything else, like network input data, is sent to Indexer1, thanks to the default group setting.

At this point, we need to insert the new environment hosts; in particular, we need to link a new set of HFs. In this phase, as already shared, we need to send data to the old env and to the new one. We could discuss avoiding another HF set, but there are some reasons for using it and the architecture has been approved by Splunk itself. So, what we have to achieve now is:

- All data is still sent to the old Indexer1 and Indexer2.
- All data must also be sent to the new HF set.

So, how did we try to perform this? Below is our changed configuration.

inputs.conf:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group, newHFs_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group, newHFs_group

outputs.conf:

[tcpout]
defaultGroup=Indexer1_group, newHFs_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

[tcpout:newHFs_group]
disabled=false
server=HF1:9997, HF2:9997, HF3:9997

In a nutshell, we tried to achieve:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1 and the new HFs
- Logs collected on port 9998 of the cloud HF are sent to Indexer2 and the new HFs
- Everything else is sent, thanks to the default group setting, to Indexer1 and the new HFs

So, what went wrong?

- Logs collected on port 9997 of the cloud HF are sent correctly to both Indexer1 and the new HFs
- Logs collected on port 9998 of the cloud HF are sent correctly to both Indexer2 and the new HFs
- The remaining logs are not correctly sent to both Indexer1 and the new HFs.
In particular, we expected the following behavior: all logs not collected on ports 9997 and 9998, like network data inputs, are sent equally to Indexer1 and the new HFs - a copy to Indexer1 and a copy to the new HFs. So, if we output N logs, we should see 2N logs sent: N to Indexer1 and N to the new HFs.

What we are seeing instead is: all logs not collected on ports 9997 and 9998, like network data inputs, are auto load balanced and split between Indexer1 and the new HFs. So, if we output N logs, we see N sent: roughly 80% go to Indexer1 and the remaining 20% to the new HFs.

I have underlined many times that some of the logs not collected on ports 9997 and 9998 are the network ones, because we are seeing that the auto LB and log splitting happens mostly with them.
With the stats command you can't use the same field name for both the aggregation function (in your case you want a count of events, which yields a field named just count) and the list of fields by which you split the results (in your case count is also a field name within the event). You can work around the problem by renaming the field, like

| stats count as event_count by count

This way the count of events will not be named count in the results but will be named event_count, whereas the field by which you split the results (which comes from your events) will stay named count. Yes, it's a tiny bit confusing.

Anyway, I don't see the relation between your data and your desired results. And your final table command is completely unnecessary at this point - your results will already contain just the fields count and time after the last stats command, so the table command is not needed.
Probably the simplest (assuming the event you posted is an accurate representation of your events) is to use rex to extract the fields:

| rex "count:(?<count>\d+) time:(?<time>\d+)ms"
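Putting the two answers together, a minimal end-to-end sketch - the base search term is a placeholder, and the final stats sum is only an assumption about how the totals in the desired output are meant to be produced:

index=your_index "FILE_TRANSFER"
| rex "count:(?<count>\d+) time:(?<time>\d+)ms"
| stats sum(count) as count, sum(time) as time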
How do I set up the Jamf Compliance Reporter Add-on in Splunk? I couldn't find the documentation for this app. Please share it if you have it, or walk me through the process. Thank you!
@ITWhisperer Yes, just the extraction of count and time, which are there in the log. What is the correct way? I am new to Splunk.
Is it just a case of extracting count and time from your event? If so, why are you using stats commands?
I want to write a query that will give the number of times the event occurred and the time taken for it. This is the log:

log: 2024-07-01 16:57:17.022 INFO 1 --- [nio-8080-exec-6] xyztask : FILE_TRANSFER | Data | LOGS | Fetched count:345243 time:102445ms
time: 2024-07-01T16:57:17.022583728Z

I want a result like:

| count   | time   |
| 2528945 | 130444 |

The query I am writing:

base search
| stats count by count
| stats count by time
| table count time

For stats count by count I am getting the error: Error in 'stats' command: The output field 'count' cannot have the same name as a group-by field. The query isn't right; a correct solution would be helpful. I have also tried different queries in different ways.
Worked on 9.2.1; the add-on was not running.
If you know all the sourcetypes you are interested in (A, B, C, D, E, F in my example), you could do something like this:

| timechart span=1d count as event_count by sourcetype usenull=f
| foreach A B C D E F
    [| eval <<FIELD>>=coalesce(<<FIELD>>,0)
     | eval <<FIELD>>=if(<<FIELD>>==0,"No events found",<<FIELD>>)]
The eval command is converting bytes into gigabytes.  Add another `/1024` to convert to terabytes.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)
Hi @vijreddy30, see in the Splunk Validated Architectures document (https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf) what Splunk means by HA and how to implement it. For your requirements, it's really difficult to answer your question! Maybe there are some replication mechanisms, based on VMware, to do this, but I'm not an expert on VMware and this isn't the place for that question. Ciao. Giuseppe
Yes, the query works - however, I want the values to be formatted differently within the search results. I would like the values to show in terabytes. For example, using the query I get a value of 4587.43 (in GB) for an index's ingestion. I would like this to be rounded and shown in terabytes as 4.59.
Hello, I figured it out. It was in the documentation all along. In the map settings you need to go to the Color and Style section, activate the Show base layer option, put the URL "https://api.maptiler.com/maps/outdoor/{z}/{x}/{y}.png?key=YourAPIKeyHere" in the Base layer tile server field, and select Raster in the field below it. The URL above is from the Dashboard Studio Maps documentation.