Hi, I want to create a Splunk table using multiple fields. Let me explain the scenario. I have the following fields:
Name, Role (multiple roles will exist for each name), HTTPrequest (there are multiple responses: 2**, 3**, 4** and 5**)
When the query is run, my final output should group the data in the below format for every day:
Date       Name    Role       Success  Failed  Total  Failed %
01-Jan-23  Rambo   Team lead  100      0       100    0
01-Jan-23  Rambo   Manager    100      10      110    10
01-Jan-23  King    operator   2000     100     2100   5
02-Jan-23  King    Manager    100      0       100    0
03-Jan-23  cheesy  Manager    100      10      110    10
04-Jan-23  cheesy  Team lead  4000     600     4600   15
So, what I tried is:
index=ABCD
| bucket _time span=1d
| eval status=case(HTTPrequest < 400, "Success", HTTPrequest > 399, "Failed")
| stats count by _time Name Role status
This works and gives something like the output below, but I need Success and Failed in two separate columns as I have shown above, and I also need to add the Failed % and Total columns.
Date       Name    Role       HTTPStatus  COUNT
01-Jan-23  Rambo   Team lead  Success     100
01-Jan-23  Rambo   Team lead  Failed      0
01-Jan-23  Rambo   Manager    Success     100
01-Jan-23  Rambo   Manager    Failed      10
01-Jan-23  King    operator   Success     2000
01-Jan-23  King    operator   Failed      200
02-Jan-23  King    Manager    Success     10
03-Jan-23  cheesy  Manager    Success     300
04-Jan-23  cheesy  Team lead  Success     400
I also tried chart count over X by Y, but chart allows only two grouping fields, not more. Could you please suggest how to get this sorted?
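In case it helps, here is an untested sketch building on that query (it assumes HTTPrequest is numeric): instead of splitting into a status field, count Success and Failed directly inside stats with eval conditions, then derive Total and Failed %:
index=ABCD
| bucket _time span=1d
| stats count(eval(HTTPrequest < 400)) as Success count(eval(HTTPrequest >= 400)) as Failed by _time Name Role
| eval Total=Success+Failed
| eval "Failed %"=round(Failed*100/Total,2)
| rename _time as Date
| fieldformat Date=strftime(Date,"%d-%b-%y")
count(eval(...)) counts only the events where the condition is true, which produces the two separate columns in a single pass.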
Hello,
I have a couple of Splunk columns that look as follows:
server:incident:incident#:severity
severity
This object is then fed to another system which separates it and generates incidents.
server: hostname
incident: category of incident
incident#: the incident number
severity: Critical/Warning/Clear
Example:
serverA:zabbix:123456:Warning
Warning
serverA:zabbix:123456:Critical
Critical
The objective is to preserve the uniqueness of the incident (if Warning, then create a ticket; if Critical, then call out).
All works well with the separation of Critical and Warning alerts; however, when a Clear is generated, I need to generate two records that look as follows:
serverA:zabbix:123456:Warning
Clear
serverA:zabbix:123456:Critical
Clear
This way, the object that has been sent will get the clear.
Is there a way to achieve this?
Thanks
David
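One possible approach (an untested sketch; the field names server, category, incident_num, and severity are placeholders for however the composite key is actually built in your search) is to fan a Clear out into both severities with a multivalue field and mvexpand:
| eval variant=if(severity="Clear", "Warning,Critical", severity)
| makemv delim="," variant
| mvexpand variant
| eval key=server . ":" . category . ":" . incident_num . ":" . variant
| table key severity
For a Clear event this emits two rows, one keyed :Warning and one keyed :Critical, both carrying severity Clear; Warning and Critical events pass through unchanged.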
@bowesmana thanks for your inputs. Source 2 events are not tied to the physical clock: a single day in the application could span multiple calendar days, or multiple application days can fit within a single calendar day. I'm exploring the option of populating these two sources separately in a dashboard and trying to pass the source 1 date/time as inputs to source 2 to get the events by each logical date.
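For what it's worth, the hand-off in the source 2 panel could look roughly like this (a sketch only; the index, sourcetype, and the logical_start/logical_end tokens set by the source 1 panel's drilldown are all hypothetical):
index=source2_index sourcetype=app_events earliest=$logical_start$ latest=$logical_end$
| stats count by event_type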
No one here in the Community knows the answer to that and Splunk's policy is to not attach dates to future features. We'll know about it when it's released. Check https://ideas.splunk.com to see if others are asking the same thing and vote for it.
The prerequisites indicate that the Splunk DB Connect extension will not work with systems that are FIPS compliant. Will this change in a future release, and is there a timeframe for that release?
You are right, the problem is in the add-on linked to the previous sourcetype. Thanks for your suggestions; I have all the data I need to perform the analyses. I'm going to do them.
I'm curious about why a sourcetype can no longer be used. Sourcetypes never expire. Perhaps it's an add-on that can't be used? The inputs.conf file to check is the one that references the file or directory we're talking about. Use btool to find it:
splunk btool inputs list --debug | grep "<<CSV file or directory name>>"
Have you checked the logs? Have you tried the search I suggested? Have you tried looking in other indexes?
Hi @richgalloway, thanks for your answer. I can share some other bits with you. Previously we used another sourcetype provided by a Splunk-supported add-on, which can no longer be used after a check with support. Even with some problems, data was sent to the cloud while using it, so the HF has the right permissions to read the pulled CSV files. I tested the custom add-on in a local test environment, and there all data, even the timestamp, is extracted correctly. I thought about the inputs.conf file, but I'm not sure which one I have to analyze: the one in SPLUNK_HOME/etc/system/local? The one in SPLUNK_HOME/etc/system/default? Others?
Searches are in the audit log. Saved searches will have a non-empty value in the savedsearch_name field. The user name is in the user field.
index=_audit action=search
| table user savedsearch_name search
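To limit the output to saved searches only, one could also filter on that field (untested):
index=_audit action=search savedsearch_name=*
| table user savedsearch_name search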
You have the right steps, but perhaps something in the details is amiss.
Verify the inputs.conf stanza points to the correct file/directory.
Verify the file permissions allow reading by the HF.
Check the splunkd.log files on the HF to see if any messages might explain why the file is not uploaded.
Confirm the CSV file has timestamps for each event and that the timestamps are correctly extracted. Timestamps that are in the future or too far in the past will not be found by Splunk.
Try searching a wide time range to see if the data has bad timestamps:
index=web earliest=0 latest=+10y
Untested, but try the chart command:
| eval weeknum=strftime(strptime(yourdatefield,"%d-%m-%Y"),"%V")
| chart dc(Task_num) as Tasks over weeknum by STATUS
This depends on your use case and your environment. If you have Splunk Cloud in use, then you can try Splunk Edge Processor; that is probably the easiest way to do it. Without Splunk Cloud you can try ingest-time eval (INGEST_EVAL) or the "old way" with props.conf and transforms.conf. More about this: Field extraction configuration https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Configureindex-timefieldextraction Are you absolutely sure that you want to extract those fields at index time rather than at search time?
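For reference, the "old way" looks roughly like this (a minimal sketch; the sourcetype name, regex, and field name are placeholders, and you would also need a fields.conf entry with INDEXED = true to search the field efficiently):
# props.conf
[my_sourcetype]
TRANSFORMS-extract_user = extract_user_indexed
# transforms.conf
[extract_user_indexed]
REGEX = user=(\w+)
FORMAT = user::$1
WRITE_META = true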
Hi, thanks. Not so pretty, but good enough as a workaround. Do you know how I can add "%" to each value? Current query:
| stats sum(CountEvents) by CT
| rename "sum(CountEvents)" as "CountEvents"
| eventstats sum(CountEvents) as Total
| eval "percentages%"=round(CountEvents*100/Total,2)
| fields - Total
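In case it helps, one way to append a literal "%" is string concatenation in eval (an untested sketch of the same query):
| stats sum(CountEvents) as CountEvents by CT
| eventstats sum(CountEvents) as Total
| eval Percentage=round(CountEvents*100/Total,2) . "%"
| fields - Total
Note that the value becomes a string, so numeric sorting on that column no longer works.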
Hi Everyone, I want to plot a chart according to the calendar week. I plotted a timechart like this:
| timechart span=7d distinct_count(Task_num) as Tasks by STATUS
But this doesn't give the exact calendar weeks. I am also keeping this chart's data to the last 3 months. Does anyone have an idea how to plot a bar chart based on calendar weeks? Instead of dates, I want to see the data for the calendar weeks of the last 3 months. I found out from the Splunk community how to get the calendar week, but I am not able to plot a graph out of it:
| eval weeknum=strftime(strptime(yourdatefield,"%d-%m-%Y"),"%V")
Hello, I'm trying to create a timechart which compares two date/time ranges. I want to see the values of last Sunday (10.9) between 15:00-16:30 and compare them with the values for the same time on the Sunday of the previous week (3.9).
How can I do it?
Thanks
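One approach that might work is the timewrap command, which overlays consecutive periods on the same time axis (an untested sketch; the index is a placeholder, and date_wday/date_hour/date_minute are Splunk's default datetime fields, where available):
index=your_index earliest=-14d@d latest=now
| where date_wday="sunday" AND (date_hour=15 OR (date_hour=16 AND date_minute<=30))
| timechart span=5m count
| timewrap 1week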
Hi Team, I am looking for help creating a search query for my daily report, which runs 3 times a day. We are putting files into a directory which we are monitoring in Splunk. Is there any way we can grab events from only the latest source file? For example:
index=abc sourcetype=xyz
source=/opt/app/file1_09092023.csv
source=/opt/app/file2_09102023.csv
source=/opt/app/file3_09112023.csv ...
New files can be placed from time to time. I want the report to show only events from the latest file. Is that possible? Thank you
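One pattern that might work is a subsearch that finds the most recently written source and feeds it to the outer search (untested; it assumes the newest file is the one whose latest event is most recent):
index=abc sourcetype=xyz
    [ search index=abc sourcetype=xyz
      | stats latest(_time) as latest by source
      | sort - latest
      | head 1
      | fields source ]
The subsearch returns a single source="..." clause, so the outer search is restricted to events from that file only.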