All Posts

That is indeed strange, because your setup - according to the specs for both files (inputs.conf and outputs.conf) - should work as expected. I suppose you checked with btool what the effective config is for your inputs and outputs (especially that nothing overwrites your _TCP_ROUTING)? One thing I'd try would be to add the _TCP_ROUTING entries to the [default] stanza and to [WinEventLog] (if applicable; I suppose in your case it's not).
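For reference, a rough sketch of those checks, assuming a default $SPLUNK_HOME and that the routing is defined on the cloud HF (adjust paths to your environment); the --debug flag shows which file each effective setting comes from, which makes it easy to spot anything overriding _TCP_ROUTING:

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i _tcp_routing
$SPLUNK_HOME/bin/splunk btool outputs list --debug

And a sketch of the [default] stanza idea, using the group names from your thread, in the HF's inputs.conf:

[default]
_TCP_ROUTING = Indexer1_group, newHFs_group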
Hi Experts, it used to work fine (I uploaded the last version last year), but today, when I try to upload a new version of our Splunk app (https://splunkbase.splunk.com/app/4241), it fails. I have tried multiple times and it fails every time. There is no error in the UI, but I can see a 403 in the inspector:

POST https://classic.splunkbase.splunk.com/api/v0.1/app/4241/new_release/ 403 (Forbidden)

Could you please let me know what is going on here?
Hi! Now that global logon has been implemented, how can you get the value of the cookie that is needed? I can only get it manually, but I need to obtain it programmatically; however, I can no longer get it by requesting login with basic auth.
It only works if you provide the header 'Cookie': 'JSESSIONID=<the_actual_value>;'. The actual value can be taken from the dev tools once you're logged in: just look in the Network tab and read the request headers of any request that is sent.
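If you need to send that header from a script, here is a minimal sketch, assuming Python with the requests library; the URL is a hypothetical placeholder and the session id is the value you copied manually from the browser:

import requests

# Value copied manually from the browser's Network tab (request headers).
JSESSIONID = "<the_actual_value>"

# Hypothetical endpoint - replace with whatever you actually need to call.
response = requests.get(
    "https://example.com/some/endpoint",
    headers={"Cookie": f"JSESSIONID={JSESSIONID};"},
)
print(response.status_code)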
Hi Splunkers, today I have a strange situation that requires a thorough description on my part, so please forgive me if I'm going to be long.

We are managing a Splunk Enterprise infrastructure previously managed by another company. We are in charge of AS-IS management and, at the same time, we are performing the migration to a new environment. The setup of the new Splunk environment is done, so now we need to migrate the data flows. Following Splunk best practice, we need to temporarily run a double data flow:

- Data must still go from the log sources to the old environment.
- Data must also flow from the log sources to the new environment.

We have already handled a double data flow for another customer, managed using the "Route and filter data" doc and support here on the community. So the point is not that we don't know how it works; the issue is that something is not going as expected.

So, how is the current environment configured? The key elements:

- A set of HFs deployed in the customer's data center.
- A cloud HF in charge of collecting data from the above HFs and other data inputs, like network ones.
- Two different indexers: they are not in a cluster, they are separate, isolated indexers. The first one collects a subset of the data forwarded by the cloud HF, the second one collects the rest.

So, how is the cloud HF configured for TCP data routing? In $SPLUNK_HOME/etc/system/local/inputs.conf, two stanzas are configured to receive data on ports 9997 and 9998; the configuration is more or less:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group

Then, in $SPLUNK_HOME/etc/system/local/outputs.conf we have:

[tcpout]
defaultGroup=Indexer1_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

So the current behavior is:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1.
- Logs collected on port 9998 of the cloud HF are sent to Indexer2.
- Everything else, like network input data, is sent to Indexer1, thanks to the default group setting.

At this point, we need to insert the new environment's hosts; in particular, we need to link a new set of HFs. In this phase, as already shared, we need to send data to both the old environment and the new one. We can discuss whether to avoid inserting another HF set, but there are reasons for using it and the architecture has been approved by Splunk itself. So, what we have to achieve now is:

- All data is still sent to the old Indexer1 and Indexer2.
- All data must also be sent to the new HF set.

How did we try to do this? Below is our changed configuration.

inputs.conf:

[<log sent on HF port 9997>]
_TCP_ROUTING = Indexer1_group, newHFs_group

[<log sent on HF port 9998>]
_TCP_ROUTING = Indexer2_group, newHFs_group

outputs.conf:

[tcpout]
defaultGroup=Indexer1_group, newHFs_group

[tcpout:Indexer1_group]
disabled=false
server=Indexer1:9997

[tcpout:Indexer2_group]
disabled=false
server=Indexer2:9997

[tcpout:newHFs_group]
disabled=false
server=HF1:9997, HF2:9997, HF3:9997

In a nutshell, we tried to achieve:

- Logs collected on port 9997 of the cloud HF are sent to Indexer1 and to the new HFs.
- Logs collected on port 9998 of the cloud HF are sent to Indexer2 and to the new HFs.
- Everything else is sent, thanks to the default group setting, to Indexer1 and to the new HFs.

So, what went wrong?

- Logs collected on port 9997 of the cloud HF are sent correctly to both Indexer1 and the new HFs.
- Logs collected on port 9998 of the cloud HF are sent correctly to both Indexer2 and the new HFs.
- The remaining logs are not correctly sent to both Indexer1 and the new HFs.
In particular, we expected to see the following behavior: all logs not collected on ports 9997 and 9998, like the network data inputs, are sent equally to Indexer1 and to the new HFs, that is, one copy to Indexer1 and one copy to the new HFs. So, if N logs are output, we should see 2N logs sent: N to Indexer1 and N to the new HFs.

What we are actually seeing is: all logs not collected on ports 9997 and 9998, like the network data inputs, are auto load balanced and split between Indexer1 and the new HFs. So, if N logs are output, we see only N sent: roughly 80% go to Indexer1 and the remaining 20% to the new HFs.

I have underlined many times that some of the logs not collected on ports 9997 and 9998 are the network ones, because the auto load balancing and log splitting is happening mostly with them.
With the stats command you can't use the same field name in both the aggregation function (in your case you want a count of events, which yields a field named just count) and the list of fields by which you split the results (in your case count is also a field name within the event). You can work around the problem by renaming the field, like

| stats count as event_count by count

This way the count of events will not be named count in the results but event_count, whereas the field by which you split the results (which comes from your events) will stay named count. Yes, it's a tiny bit confusing.

Anyway, I don't see the relation between your data and your desired results. And your final table command is completely unnecessary at this point: after the last stats command your results will already be a table of the fields count and time, so the table command is not needed.
Probably the simplest approach (assuming the event you posted is an accurate representation of your events) is to use rex to extract the fields:

| rex "count:(?<count>\d+) time:(?<time>\d+)ms"
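From there, a rough sketch of getting to the single row of totals your example result suggests, assuming the figures you want are sums across the matching events (swap in max() or avg() if that's what you actually need):

<your base search>
| rex "count:(?<count>\d+) time:(?<time>\d+)ms"
| stats sum(count) as count sum(time) as time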
How do I set up the Jamf Compliance Reporter Add-on in Splunk? I couldn't find the documentation for this app. Please share it if you have it, or walk me through the process. Thank you!
@ITWhisperer Yes, just the extraction of count and time, which are there in the log. What is the correct way? I am new to Splunk.
Is it just a case of extracting count and time from your event? If so, why are you using stats commands?
I want to write a query which will give the number of times the event occurred and the time taken for it. This is the log:

log: 2024-07-01 16:57:17.022 INFO 1 --- [nio-8080-exec-6] xyztask : FILE_TRANSFER | Data | LOGS | Fetched count:345243 time:102445ms
time: 2024-07-01T16:57:17.022583728Z

I want a result like:

| count   | time   |
| 2528945 | 130444 |

The query I am writing:

base search
| stats count by count
| stats count by time
| table count time

For stats count by count I am getting the error:

Error in 'stats' command: The output field 'count' cannot have the same name as a group-by field

The query isn't right; a correct solution would be helpful. I have also tried different queries in different ways.
Worked on 9.2.1; the add-on was not running.
If you know all the sourcetypes you are interested in (A, B, C, D, E, F in my example), you could do something like this:

| timechart span=1d count as event_count by sourcetype usenull=f
| foreach A B C D E F
    [| eval <<FIELD>>=coalesce(<<FIELD>>,0)
     | eval <<FIELD>>=if(<<FIELD>>==0,"No events found",<<FIELD>>)]
The eval command is converting bytes into gigabytes.  Add another `/1024` to convert to terabytes.

index=_internal sourcetype=splunkd source=*license_usage.log type=Usage idx=*
| stats sum(b) as usage by idx
| rename idx as index
| eval usage=round(usage/1024/1024/1024/1024,2)
Hi @vijreddy30, see in the Splunk Validated Architectures document (https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf) what Splunk means by HA and how to implement it. For your requirements, it's really difficult to answer your question! Maybe there are some replication mechanisms, based on VMware, to do this, but I'm not an expert on VMware and this isn't the place for that question. Ciao. Giuseppe
Yes, the query works; however, I want the values to be formatted differently within the search results. I would like the values to show in terabytes. For example, using the query I get a value of 4587.43 (in GB) for an index's ingestion value. I would like this to be rounded and shown in terabytes as 4.59.
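For what it's worth, a minimal sketch of that last conversion step, assuming the existing search leaves a usage field in GB (divide by 1000 instead if you prefer decimal terabytes):

| eval usage=round(usage/1024,2)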
Hello, I figured it out. It was in the documentation all along. In the map settings you need to go to the Color and Style section, activate the Show base layer option, put the URL "https://api.maptiler.com/maps/outdoor/{z}/{x}/{y}.png?key=YourAPIKeyHere" in the Base layer tile server field, and below that field select Raster. The URL above is in the Dashboard Studio Maps documentation.
So I have a data source which is very low volume and is not expected to have events at all (it only logs when there is an unexpected event). I have a requirement to produce a report showing there were no unexpected events in the last 90 days. I tried the following search query, but it is not giving the results per day:

index=foo
| timechart span=1d count as event_count by sourcetype
| append [| stats count as event_count | eval text="no events found"]

PS - the count you are seeing below is for the other sourcetype that is under the same index=foo, and the sourcetype where the count is 0 is displayed at the bottom (the sourcetype name is not displayed as there are no events for that sourcetype). I want my output to be specific to this sourcetype and to display count = 0 for all the days where the data is not present.
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, sin... See more...
The "Splunk Add-on for NetApp Data ONTAP" is showing on the site as Unsupported. Splunk Add-on for NetApp Data ONTAP | Splunkbase We are trying to find out if the app can be used with REST API, since OnTAP is eliminating its support for legacy ZAPI/ONTAPI Can anyone provide information as to the long-term prospects of this or another App which would collect data from Netapp OnTAP?
Try something like this (this assumes that you want daily results based on when the get was received, rather than the put; if this is different, change the bin command to use the other field):

index=myindex source=mysoruce earliest=-7d@d latest=@d
| eval PPut=strptime(tomcatput, "%y%m%d %H:%M:%S")
| eval PGet=strptime(tomcatget, "%y%m%d %H:%M:%S")
| stats min(PGet) as PGet, max(PPut) as PPut, values(Priority) as Priority by TRN
| eval tomcatGet2tomcatPut=round((PPut-PGet),0)
| eval E2E_5min=if(tomcatGet2tomcatPut<=300,1,0)
| eval E2E_20min=if(tomcatGet2tomcatPut>300 and tomcatGet2tomcatPut<=1200,1,0)
| eval E2E_50min=if(tomcatGet2tomcatPut>1200 and tomcatGet2tomcatPut<=3000,1,0)
| eval E2EGT50min=if(tomcatGet2tomcatPut>3000,1,0)
| eval Total = E2E_5min + E2E_20min + E2E_50min + E2EGT50min
| bin PGet as _time span=1d
| stats sum(E2E_5min) as sum_5min sum(E2E_20min) as sum_20min sum(E2E_50min) as sum_50min sum(E2EGT50min) as sum_50GTmin sum(Total) as sum_total by _time Priority
| eval good = if(Priority="High", sum_5min, if(Priority="Medium", sum_5min + sum_20min, if(Priority="Low", sum_5min + sum_20min + sum_50min, null())))
| eval Per_cal=round(100*good/sum_total,1)
| xyseries _time Priority Per_cal