All Posts

I deployed Splunk Universal Forwarder 9.1.1 on Linux servers running on VPC VSIs in IBM Cloud. Some servers are RHEL7, others are RHEL8. These servers send logs to a heavy forwarder. After deployment, memory usage climbed on every server, and one server went down because of an apparent memory leak. CPU usage is also high, as expected while the splunk process is running. For example, one server's CPU usage increased by 30% and the process consumed 5.7 GB of its 14 GB of memory after Splunk came up. How can I reduce the resource usage?
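One knob that often helps on resource-constrained universal forwarders is throttling the forwarder's throughput in limits.conf. A minimal sketch (256 is the usual UF default, shown here only as an illustrative starting value; tune it for your log volume):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the universal forwarder
[thruput]
# Cap the indexing-pipeline throughput in KB per second.
# Lower values reduce CPU and memory pressure at the cost of
# slower log delivery (events queue on the forwarder instead).
maxKBps = 256
```

Restart the forwarder after the change. This won't fix a genuine memory leak, but it bounds how hard the process works during log bursts.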
Thank you. With a huge dashboard it looks like I am hitting the maximum number of concurrent searches Splunk allows, so I was trying to see if I could combine searches. Would append [search…] be started as a new concurrent search?
Will this change the timezone in the output to SGT?  We want the output to be shifted to SGT and then formatted to "%Y-%m-%d %H:%M:%S" 
Hi @Satyapv , as @yuanliu said, I don't understand why you'd put inhomogeneous results in the same search. Anyway, you could use the append command, but you'll have empty values in the columns coming from the other search:

index=IndexA
| stats count(X) AS X avg(Y) AS Y BY XYZ
| append [ search index=IndexB | stats count(K) AS K max(M) AS M BY KM ]

Ciao. Giuseppe
Hi @mlevsh, the easiest way is to ask to remove that rule, because it isn't useful! Anyway, you should list all the existing indexes in the WHERE condition:

| tstats count where index IN (index1,index2,index3) by index host
| fields - count

To avoid repeating this list in every command, you could also put all these indexes in a macro or an eventtype and use it in your searches. Ciao. Giuseppe
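A minimal sketch of the macro approach Giuseppe suggests (the macro name `all_indexes` and the index names are illustrative, not from the thread):

```
# macros.conf (in an app's local/ directory on the search head)
[all_indexes]
definition = index IN (index1,index2,index3)
```

The search then becomes:

| tstats count where `all_indexes` by index host
| fields - count

Updating the macro definition later updates every search that uses it.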
Hi @Mien, if the days on which you're receiving less data aren't weekends, you should check whether there are scheduled activities or a downtime of those systems on those days. In addition, you should check whether this behaviour happens every week or only in one. Then compare the sizes of the /opt/splunk/var/log/splunk/metrics.log files to understand whether the issue is on the Splunk side or on the system side. Ciao. Giuseppe
Sorry, but SGT corresponds to UTC+8, so if your user's timezone preference in Splunk is set to SGT, _time is already displayed in SGT. If you want to change the displayed time format to "%Y-%m-%d %H:%M:%S", you should use eval with the time functions:

| eval Time=strftime(_time,"%Y-%m-%d %H:%M:%S")

Ciao. Giuseppe
I tried implementing the Slack app but was unable to send alerts from Splunk, so can you guide me through how to use the app to send alerts without using a webhook?
Hi Team, I'm currently receiving AWS CloudWatch logs in Splunk using the add-on. I'm developing a use case and need to utilize the "event Time" field from the logs. I need help converting the event Time from UTC to SGT. Sample event Time values (UTC+0):

2023-06-30T17:17:52Z
2023-06-30T21:29:53Z
2023-06-30T22:32:53Z
2023-07-01T00:38:53Z
2023-07-01T04:50:52Z
2023-07-01T05:53:55Z
2023-07-01T06:56:54Z
2023-07-01T07:59:52Z
2023-07-01T09:02:56Z
2023-07-01T10:05:54Z
2023-07-01T11:08:53Z
2023-07-01T12:11:53Z

End result: UTC+0 converted to SGT (UTC+8). Expected output format is "%Y-%m-%d %H:%M:%S".
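A possible SPL sketch for the shift. Note the assumptions: the extracted field name `eventTime` is a guess at what the add-on produces, and both strptime and strftime here operate in the searching user's timezone, so the fixed +8h (28800 s) offset is only correct when that timezone preference is set to UTC:

```
| eval event_epoch=strptime(eventTime, "%Y-%m-%dT%H:%M:%SZ")
| eval eventTime_sgt=strftime(event_epoch + 28800, "%Y-%m-%d %H:%M:%S")
```

Alternatively, set the user's timezone preference to Asia/Singapore and format _time directly with strftime, which avoids hard-coding the offset.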
Yes, I came across that. So is there an alternative way to run Python scripts?
Forget Splunk. If there are no common fields between indices, can you illustrate what the stats result would look like? Please show some sample tables of field values in each index (in text, anonymized as needed). Then, illustrate the corresponding output table (also in text) that you envision for the two data tables. If anonymizing data is difficult, illustrate mock data tables and calculate the desired output table by hand, so volunteers can understand your use case. Let me also point out that your illustrated mock code, "Stats Count (X) Avg(Y) by XYZ", is confusing because you mentioned no field named XYZ. The other mock code, "stats Count (K) Max(M) by K M", also doesn't make sense, because when you group by M, Max(M) can only have the value of that group's M, unless K and M do not appear in the same event, in which case Max(M) is null.
If any of the searches, including your initial one, gives output that doesn't suit your needs, please post sample data (anonymized as needed) that leads to such output, the actual output (anonymized as needed) from that search, and explain what the desired output should look like (and how the desired output differs from the actual output, if that is not painfully obvious). Your initial search performs transaction on user. After excluding closed transactions, what remains in the stream are events with eventcode 4769 that do not have those three eventcodes for the same user, as well as events with eventcodes other than those three. Isn't this what you asked for?
Yep, that's a valid and nice piece of SPL (the eval(Score>0)). The "!=" comparison should also do the trick:

| stats count(eval(Score!=0)) as Total_Non_Zero_Vuln by ip

The stats and eval commands give us so many options, very nice!
I tried what you suggested, but I was unable to get the results I expected. To resolve the issue, I had to disable the Java log enrichment feature in Dynatrace OneAgent to stop OneAgent from injecting {dt.trace_id=837045e132ad49311fde0e1ac6a6c18b, dt.span_id=169aa205dab448fc, dt.trace_sampled=true} into my logs. Now things are back to normal.
I think I just figured it out. This search worked when I tried it; please suggest improvements if you see any. Thanks!

| stats count(Vulnerability) as Total_Vuln, count(eval(Score>0)) as Total_Non_Zero_Vuln by ip
Hi @ssuluguri  As per the doc - https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers - the UFs should be able to upgrade directly from 6.x to 9.x. May we know the Splunk indexer version, please? Thanks.

EDIT - Please check this doc: https://docs.splunk.com/Documentation/Splunk/9.0.0/Installation/AboutupgradingREADTHISFIRST It says: "Upgrading a universal forwarder directly to version 9.0 is supported from versions 8.1.x and higher." So it's better to upgrade from 6.x to 8.1.x and then to 9.0.x. Thanks.
How to count the total number of rows with a non-zero field? Thank you in advance. Below is the data set:

ip    Vulnerability    Score
ip1   Vuln1            0
ip1   Vuln2            3
ip1   Vuln3            4
ip2   Vuln4            0
ip2   Vuln5            0
ip2   Vuln6            7

| stats count(Vulnerability) as Total_Vuln, countNonZero(Score) as Total_Non_Zero_Vuln by ip

Is there a function similar to countNonZero(Score) to count the number of rows with a non-zero field in Splunk? With my search above, I would like to have the following output:

ip    Total_Vuln    Total_Non_Zero_Vuln
ip1   3             2
ip2   3             1
Hi @taufiqueshaikh .. From the doc - https://docs.splunk.com/Documentation/Splunk/9.1.1/Admin/TypesofSplunklicenses

The Splunk Enterprise Trial license: when you download and install Splunk Enterprise, a Splunk Enterprise Trial license is automatically generated for that instance. The following important points apply to the Enterprise Trial license:

- The Enterprise Trial license gives access to all Splunk Enterprise features.
- The Enterprise Trial license is for standalone, single-instance installations of Splunk Enterprise only.
- The Enterprise Trial license cannot be stacked with other licenses.
- The Enterprise Trial license expires 60 days after you install the Splunk Enterprise instance, unless otherwise specified to customers.
- The Enterprise Trial license allows you to index 500 MB of data per day to Splunk Enterprise. If you exceed that limit you receive a license warning.
- The Enterprise Trial license prevents searching if there are a set number of license warnings.
Hi @gcusello  For example, within one week (looking at average EPS), 18th Oct and 19th Oct received less data than usual, while 15th, 16th, 17th, 20th and 21st Oct look normal. The data source is /opt/splunk/var/log/splunk/metrics.log
You will need to fix 3 files in it:

wordcloud_app/appserver/static/visualizations/wordcloud/visualization.js
wordcloud_app/appserver/static/visualizations/wordcloud/webpack.config.js
wordcloud_app/appserver/static/visualizations/wordcloud/src/wordcloud.js

In each, replace "vizapi" with "api". Then restart Splunk, and you may also need to bump the version (http://splunkhostname:8000/en-US/_bump), because it is JavaScript and could be cached by the browser.