New to Splunk. The add-ons for VMware and other virtualization products seem to be components that were once bundled into collective packages. There are 30+ options that perform separate functions. Which are necessary to get a general overview of my environment, and which ones are still receiving support? I'm overwhelmed by the number of options and I'm not sure which ones are applicable.
Hi,
If I want to show the percentage, then I use
<option name="charting.chart.showPercent">true</option>
but when I want to display the absolute value in the pie chart, I tried the following and it does not work:
<option name="charting.chart.showValue">true</option>
Thanks for the help.
Hi, I have this SPL query but I am getting the following error:
Error in 'rename' command: Usage: rename [old_name AS/TO/-> new_name]+.
Any ideas why or how to resolve this please?
| tstats count where index=os earliest=-7d latest=-3h by host, _time span=3h
| stats median(count) as median by host
| join host
    [| tstats count where index=os earliest=-3h by host]
| eval percentage_diff=((count/median)*100)-100
| where percentage_diff<-5 OR percentage_diff>5
| sort percentage_diff
| rename median as “Median Event Count Past Week”, count as “Event Count of Events Past 3 Hours”, percentage_diff as “Percentage Difference”
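One likely cause worth checking: the quotation marks around the renamed field names in the query are curly "smart quotes" (“ ”) rather than straight ASCII double quotes, and Splunk's parser does not recognize them, which produces exactly this usage error. Assuming that is the issue, the rename clause with straight quotes would look like:

```spl
| rename median as "Median Event Count Past Week",
    count as "Event Count of Events Past 3 Hours",
    percentage_diff as "Percentage Difference"
```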
I am using a query and getting the logs, but for every error the error message string comes out as "**Setting up error code and description**". I need to extract the errors whose message is "error in calling tarik services", but it is not being extracted. I need help; I don't know how to use rex. Please help me.
index=dep_ago Appid=APP-0431 prod "error"
This is the command I am using, but I am not getting the "error in calling tarik services" error or any other string; only "**Setting up error code and description**" is coming through, with all the details in the logs. Please help.
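As a rough sketch of a rex-based approach (assuming the target message appears verbatim in the raw event; the field name error_msg is made up for illustration):

```spl
index=dep_ago Appid=APP-0431 "error in calling tarik services"
| rex field=_raw "(?<error_msg>error in calling tarik services[^\"]*)"
| table _time error_msg
```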
Hi,
I have created an Advanced Threat Protection incidents correlation search that is generating notable events. How can I make it generate fewer notables?
Thanks
I updated an alert description using the REST API (port 8089). When I use the API to list the description it shows the updated description. When I look at the alert using the web page (port 8000) it still has the old version. There are multiple instances of Splunk and a load balancer, but I do not know the specifics. I always use the same IP address to access Splunk.
For API access I use a token under my username. Is my token the problem? My user has enough rights to create and change alerts, although when I list all alerts using | rest /servicesNS/-/-/saved/searches I get a warning: Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability.
Thanks.
I have a problem where not all values are showing up in a chart - and the values that do show up are rather flatlined. For example, here is the data I gathered for this chart:
However, none of the earlier values show up in the chart.
I have remade the index, and the data coming in from the CSV files is good.
Can anyone help me identify what's wrong?
Many thanks.
We had some feeds with host="unassigned". The following tstats search will not return any results for some feeds, but it works for others:
| tstats count where index=aindex by host, sourcetype, index
Hello. I'm fairly new to Splunk and SPL so bear with me here.
I have the following scenario:
I have an existing lookup file that was created by a search and is then updated daily by a similar saved search.
So to sum it up: run a search, append the contents of the lookup file, remove old events, and finally output the data to the lookup file again, overwriting its old contents. If the search, after appending the lookup data and cleaning up, results in zero events, I still want the lookup file to remain.
Now, when reading the Splunk docs I get a bit confused about the create_empty and override_if_empty optional arguments.
For create_empty, the Splunk docs state: "If set to true and there are no results, a zero-length file is created." Since outputlookup normally overwrites the file if it already exists, is that also the case when writing no results?
The same question applies to override_if_empty, which seems to do something similar. If override_if_empty is set to false, does outputlookup still overwrite the lookup file with a zero-length list when the search has no results?
My saved search to update the lookup file looks approximately like this:
| "get external data"
| fields blah blah blah
| fields - _*
| rename blah blah blah
| eval time=now()
| inputlookup "my existing lookup file" append=true
| sort 0 - time
| where time > relative_time(now(), "-7d@d") OR isnull(time)
| outputlookup "my existing lookup file"
So do I need to add create_empty=true and override_if_empty=false? Or do I just need one of them, and if so which one?
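For reference only (not a claim about which of the two is needed): both are optional arguments of outputlookup and are placed before the filename, so a version of the final command with both set would look like:

```spl
| outputlookup create_empty=true override_if_empty=false "my existing lookup file"
```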
Grateful for any clarification on this matter. Thanks in advance.
I am looking for an integration of AppDynamics with BMC Event Manager. My AppDynamics controller is hosted on SaaS. How can I post event data to the BMC Event Manager tool? Please share if anyone has done the same integration in a SaaS environment.
Could someone please let me know where I'm going wrong in my query?
| spath service_roles{} output=service_role
| stats count by cluster_name date service_role
| spath input=service_role service output=service_name
| spath input=service_role role_status{} output=status
| rex max_match=0 field=_raw "hostname: <(?<hostname>.*)> type: <(?<type>.*)>"
| eval status=mvzip(hostname,type)
| mvexpand status
| rex field=status "(?<hostname>[^~]+)~(?<type>[^~]+)"
| dedup cluster_name, service_name
| table cluster_name, service_name, hostname, type
Hi Team, I am looking for help with an event log report that fires when a threshold matches. I tried both creating a report and an alert, but it either sends me the logs using the | table _time, _raw method or sends a count using | stats count | where count > 0. I need to schedule a report over the last 24 hours of data at 00:00, but only if there is an event. Please guide me. Thank you.
Hello everyone and thanks in advance.
I'm trying to make a search for file deletion but it isn't working.
Do you have any example of a use case? I tested using Sysmon, but when I delete a file I can't see Event ID 23.
Hi,
I need to create an index called "assets" from a JSON data file that I have. However, when I try to create such an index and navigate to the given data file, I receive the following error:
The index in question does not currently exist on my Splunk instance and I am trying to create a new index and populate this index with this data. Can you please help?
Thanks.
Hi all,
We have an application which produces log files into which other log files are inserted (they are pulled from stdout when the other program is executed). We are only interested in the stdout generated by the SQL statements of another program, which are themselves multiline entries in a specific format. So basically an SQL event starts with a date and ends at the next date of an SQL event. We have a regex which captures all the SQL lines we are interested in, but we cannot see a way to ignore the rest of the log file, since all routing to nullQueue, and SEDCMD, takes place after timestamp recognition and event breaking. Those other entries either mess up the event breaking or get attached to the SQL events if we specify a time config which only matches the SQL statements.
Basically what needs to be done is that all lines not matching ^(\d+|\t+|\s\s+|CREATE|SELECT|DROP|UPDATE|INSERT|FROM|TBLPROPERTIES|\)).* need to be excluded before any timestamp recognition or eventbreaking is applied.
To make it clear again: the problem is that all events, including those we want to get rid of, are multiline events with different starts and ends, and the dates for the event types are specified in different locations and formats. Hence the exclusion must occur before merging takes place.
Is this possible?
Regards
OK, I think I know what Splunk Search Runtime is, but I have never thought about what value or insight this feature can give. Today I decided to check my Splunk Cloud health and search usage statistics (just out of curiosity) and noticed that for some searches the "search runtime" is very long, 15 minutes or more, but when I run those searches myself they usually take a few seconds. So why do the statistics show that they ran for 15 minutes or more? Can someone explain? Thanks.
Hi all,
We are creating episodes, and incidents are getting created in ServiceNow (SNOW). The incident number is available in the Activity tab of Episode Review, but not in the Impact tab. Could you please help us resolve this issue?
Thanks,
Nivetha S
I need to extract a field which does not appear as a field=value pair, and I have to distinguish the logs based on that particular field. Here is an example log:
{"log":"[10:30:04.075] [INFO ] [] [c.c.n.b.i.DefaultBusinessEventService] [akka://MmsAuCluster/system/sharding/notificationAuthBpmn/4/nmT9K3rySjyoHHzxO9jHnQ_4/nmT9K3rySjyoHHzxO9jHnQ] - method=prepare; triggerName=approvalStart, entity={'id'='0f86c9007ff511ed82ffd13c4d1f79a9a07ff511ed82ffd13c4d173b0a','eventCode'='approval','paymentSystemId'='MMS','servicingAgentBIC'='null','messageIdentification'='0f86ff511ed82ffd13c4d173b0a','businessDomainName'='Mandate','catalogCode'='AN','functionCode'='APAL_INTERACTION'}
From this log I have already extracted the fields that appear as field=value pairs, like triggerName and eventCode. But I need to filter the logs for "c.c.n.b.i.DefaultBusinessEventService" and INFO-level entries. Can anyone help me with how to filter logs based on the above information? Thanks in advance.
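As a hedged sketch (the field names level and logger, plus the index and sourcetype, are illustrative assumptions, not part of the original post), the bracketed prefix of the log line could be parsed with rex and then filtered:

```spl
index=your_index sourcetype=your_sourcetype
| rex field=_raw "\[(?<level>[A-Z]+)\s*\]\s\[\]\s\[(?<logger>[^\]]+)\]"
| search logger="c.c.n.b.i.DefaultBusinessEventService" level="INFO"
| table _time level logger _raw
```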