All Topics

I have a CSV like this:

        PPAGE_ID1  PPAGE_ID2  PPAGE_ID3  PPAGE_ID4  PPAGE_ID5  PPAGE_ID6
1-Jan   123        123        123        123        123        123
2-Jan   456        456        456        456        456        456
3-Jan   789        789        789        789        789        789
4-Jan   98         98         98         98         98         98
5-Jan   87587      87587      87587      87587      87587      87587

How can I take the average of PPAGE_ID6 (or, say, PPAGE_ID100)? Please help.
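A minimal sketch, assuming the CSV has been uploaded as a lookup file (the name pages.csv is a placeholder):

```
| inputlookup pages.csv
| stats avg(PPAGE_ID6) AS avg_PPAGE_ID6
```

The same pattern works for any column, e.g. avg(PPAGE_ID100).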
Hello, I have installed the ServiceNow add-on, and my ServiceNow administrator has followed all the steps needed on the ServiceNow side. Using the alert action with the ServiceNow incident integration works fine and creates incidents in ServiceNow. However, the alert action lets us define only a limited set of fields; for example, we cannot set the IMPACT field, so ServiceNow auto-assigns the impact. So I wanted to use a custom generating command that gives me the flexibility to generate the ServiceNow incident with additional fields as parameters.

Here is my search (my alert condition: if servers exceed 90% CPU, raise a ServiceNow incident):

index=os host=* sourcetype=cpu cpu=all NOT( [| inputlookup servers.csv | where status="decom" OR status="complete blacklist" OR status="DC Outage" | rename target as host | table host]) | eval PercentCPULoad = 100 - pctIdle | stats min(PercentCPULoad) as PercentCPULoad by host | eval hostname=upper(mvindex(split(host,"."),0)) | where PercentCPULoad >= 90 | eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S") | eval Impact = 1 | snowincident --account "ServiceNow Dev" --category "Hardware" --correlation_id timestamp.":".hostname --impact 1 --state 1 --contact_type "Email" --short_description "Nishad - Splunk Created - CPU utilization is".PercentCPULoad." on ".hostname." Threshold - 90 <= ".PercentCPULoad." <=100" --assignment_group "Tools Testing Group" ci_identifier=hostname

However, this doesn't work, and I get the error message below:

Error in 'snowincident' command: This command must be the first command of a search.

As per the Splunk documentation, there are certain steps that need to be carried out on the ServiceNow server to integrate with Splunk; my ServiceNow administrator confirmed that he has followed all the steps in the documentation below.
https://docs.splunk.com/Documentation/AddOns/released/ServiceNow/ConfigureServiceNowtointegratewithSplunkEnterprise

Can you please suggest what is missing? When searching from the SNOW_TA app, the command 'snowincident' is not detected.
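The error itself says snowincident is a generating command, so it can only start a search, never sit mid-pipeline. One possible workaround, sketched below, is to drive it from the outer search with map, which runs its subsearch once per result row and substitutes field values as tokens. Whether snowincident accepts its parameters this way is an assumption to verify in your environment:

```
index=os host=* sourcetype=cpu cpu=all
| eval PercentCPULoad = 100 - pctIdle
| stats min(PercentCPULoad) AS PercentCPULoad BY host
| where PercentCPULoad >= 90
| eval hostname=upper(mvindex(split(host,"."),0))
| map maxsearches=20 search="| snowincident --account \"ServiceNow Dev\" --category \"Hardware\" --impact 1 --state 1 --contact_type \"Email\" --short_description \"CPU utilization is $PercentCPULoad$ on $hostname$\" --assignment_group \"Tools Testing Group\""
```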
After upgrading the Splunk App for AWS from version 5.0.2 to a later 5.x version, the search head does not start, with the following error:

Problem parsing indexes.conf: Cannot load IndexConfig: stanza=aws_vpc_flow_logs Required parameter=homePath not configured Validating databases (splunkd validatedb) failed with code '1'.

Has anyone hit the same issue, and how did you get it resolved? Thanks,
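The error means the [aws_vpc_flow_logs] stanza in indexes.conf is missing its homePath. A minimal sketch of a complete stanza (the paths below are typical defaults, not taken from the app, so adjust them to your layout):

```
[aws_vpc_flow_logs]
homePath   = $SPLUNK_DB/aws_vpc_flow_logs/db
coldPath   = $SPLUNK_DB/aws_vpc_flow_logs/colddb
thawedPath = $SPLUNK_DB/aws_vpc_flow_logs/thaweddb
```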
Does anyone know if it's possible, as part of a workflow action, for an event to be tagged? I would love to be able to add a tag to specific events indicating the event was acknowledged after running a specific action on the event (sending event info to a 3rd-party app). Thanks!
The heavy forwarder is RHEL 7.7, the Splunk binaries are 7.2.9.1, and the TA is version 3.5.8 (3.6.8 does the same). We're getting the data, and when one looks at the events they have proper Unix timestamps in them, but when they are indexed they all get a time of midnight. We tried this a few days ago on an old VM (RHEL 6 and Splunk 6.6.12.1) that just couldn't keep up with the volume, but it did seem to timestamp properly. Moving it to the new VM is when we found the timestamp issue. How can I correct the timestamps in Splunk?
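Midnight timestamps usually mean Splunk parsed a date but no time-of-day and fell back to 00:00. A props.conf sketch that points timestamp extraction explicitly at the epoch value (the sourcetype name and TIME_PREFIX are placeholders to adapt to where the epoch sits in your events):

```
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 10
```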
Hi, I want to subtract the subsearch results from the main search, i.e.:

index=main source=/folder/abc.csv | table customername - [index=main source=/folder/xxx.csv | table name ]

Is this achievable? I want to get only the names that are not common to both files. Thanks.
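A minimal sketch using NOT with a subsearch, renaming name to customername so the subsearch results filter the right field in the outer search:

```
index=main source=/folder/abc.csv
| search NOT [ search index=main source=/folder/xxx.csv | rename name AS customername | table customername ]
| table customername
```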
How do I find the top 10 processes per hour? I need to capture CPU, RAM, and process threads.
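A sketch assuming the ps sourcetype from the Splunk Add-on for Unix and Linux; the field names pctCPU, pctMEM, threads, and COMMAND are assumptions to check against your data:

```
index=os sourcetype=ps
| bin _time span=1h
| stats avg(pctCPU) AS cpu avg(pctMEM) AS mem avg(threads) AS threads BY _time, COMMAND
| sort _time -cpu
| streamstats count BY _time
| where count <= 10
| fields - count
```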
I have a log that I am trying to parse, and I am unable to figure this out. It looks like a type of XML file. Here is an example:

<ErrorMessage Id='20200130111127151' Date='1/30/2020' Time='11:11 AM' >
<RequestInformation Hostname='1.2.3.4' HostAddress='5.6.7.8' HostBrowser='Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services Client Protocol 4.0.30319.42000)' ReferringPage='' RequestType='POST' ContentLength='505' RawUrl='/dir/subdir/filename.asmx'>
<Browser Type='IE6' Browser='IE' Version='6.0' Platform='Unknown' SupportsFrames='True' SupportsJavascript='True' SupportsTables='True' SupportsCookies='True'/>
<Cookies> </Cookies>
<Form> </Form>
</RequestInformation>
<Exception Message='ORA-01017: invalid username/password; logon denied'>
<StackTrace>
<![CDATA[ at Oracle.DataAccess.Client.OracleException.HandleErrorHelper(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, OpoSqlValCtx* pOpoSqlValCtx, Object src, String procedure, Boolean bCheck, Int32 isRecoverable, OracleLogicalTransaction m_OracleLogicalTransaction) at Oracle.DataAccess.Client.OracleException.HandleError(Int32 errCode, OracleConnection conn, IntPtr opsErrCtx, Object src, OracleLogicalTransaction m_oracleLogicalTransaction) at Oracle.DataAccess.Client.OracleConnection.Open() at dhss.webservice.login_ws.MExecuteComponent.AuthenticateToAPP(String UserID, String Password, String DBInstance, String ServerIP, String ServerPort) ]]>
</StackTrace>
</Exception>
</ErrorMessage>

I have the add-on for Oracle Database installed, but it doesn't seem to work with this one.
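Since each event is a well-formed XML fragment, one option is search-time extraction with spath, which can address XML attributes with the {@attr} path syntax. A sketch (the sourcetype name and output field are placeholders):

```
sourcetype=your_xml_errors
| spath output=error_msg path="ErrorMessage.Exception{@Message}"
| table _time, error_msg
```

Alternatively, setting KV_MODE = xml on the sourcetype in props.conf enables automatic XML field extraction at search time.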
How can I properly extract just the client that is doing the query from the log entries below? I noticed that in some log entries the word client is followed by @xxxxx characters, and for some it isn't. Splunk's field extraction produced the extraction below, but it adds the word client to some of the IPs. Any help is appreciated. Thanks.

^(?:[^ \n]* ){5}(?P[^#]+)

2020-01-30T12:50:39-05:00 173.12.5.49 named[15584]: client @0x7f74cc307f80 173.27.28.143#50046 (www.google.ru): query: www.google.ru IN A + (173.20.3.47)
2020-01-30T12:50:21-05:00 173.19.9.46 named[15584]: 30-Jan-2020 12:50:21.069 client 173.24.28.149#50769: UDP: query: sync3.adsniper.ru IN A response: SERVFAIL +
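A rex sketch that anchors on the literal word client, optionally skips the @0x... handle, and captures the IP that precedes the #port; the field name client_ip is mine:

```
| rex "client (?:@\S+ )?(?<client_ip>\d{1,3}(?:\.\d{1,3}){3})#\d+"
| table _time, client_ip
```

Against the two samples above, this captures 173.27.28.143 and 173.24.28.149 respectively.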
We have cases in which there is no date in the log files; only the time of the event is in the data. What can we do in such cases?
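When TIME_FORMAT contains no date components, Splunk fills in the date from context (the file name, the preceding events, or the file's modification time). A props.conf sketch for time-only events (the sourcetype name is a placeholder, and the lookahead assumes an HH:MM:SS prefix):

```
[your_sourcetype]
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 8
```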
I need cron help to run an alert on every 15th and 45th minute of the hour. I tried this, but it didn't help: 15-60/30 * * * *
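The minute field only accepts 0-59, so many cron implementations reject 15-60/30 outright. The straightforward form lists the two minutes explicitly:

```
15,45 * * * *
```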
Hi all, I have the Stream add-on with almost 12 forwarders. I created a group under Stream's distributed forwarder management and gave it a regex that matches the forwarders. But only a few servers are pointing to this new group; the rest of them are pointing to the default group. I see errors under the status of the servers that are pointing to the default group. I restarted the Splunk services and updated inputs.conf; nothing worked. Please help me with this if anyone is aware. Thanks.
Hey all, I have a workflow action that passes a search string to an external app (ServiceNow) for incident creation. When I use the $_time$ token, it passes the epoch value, not a properly formatted time. I am unable to use an eval in the search string because the first command of the search string must be the ServiceNow parameter that calls the script. Does anyone have suggestions on how I could pass the properly formatted time?

This is what I have currently:

| snsecincident short_description "$sn_fe_hx_shortdesc$ $sn_fe_ips_shortdesc$ $sn_pa_threat_shortdesc$ $sn_ms_def_shortdesc$ on $sn_fe_hx_srchost$ $sn_fe_ips_dst$ $sn_ms_def_compname$ $sn_pa_threat_src$ at $Time$" category "Splunk Generated Incident" subcategory "Security Alert" cmdb_ci "$sn_fe_hx_srchost$ $sn_ms_def_compname$ $sn_fe_ips_shost$" description "BLAH BLAH"

If I try this before the snsecincident:

| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N") | rename _time as Time

I receive an error that snsecincident has to be the first command in the string. Should I create a field alias for _time that is formatted properly, and if so, how would I go about that? Thanks!
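A field alias cannot reformat a value, but a calculated field can. A props.conf sketch (the sourcetype and field name are placeholders) that would make a $formatted_time$ token available to the workflow action without touching the search string:

```
[your_sourcetype]
EVAL-formatted_time = strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
```

Then reference $formatted_time$ in the workflow action in place of $_time$.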
We have four indexers, and the replication factor is 2. The replication port on all indexers is 8080 and is enabled on all servers. We observed that indexer 2 and indexer 4 have lost connectivity and were not able to ping each other, but indexer 1 can ping indexer 4, and indexer 3 can ping indexer 4, and vice versa. I'm not sure what the exact issue is; can someone advise? Below is the complete error message:

"Search peer indexer4-xxxxx has the following message: Too many streaming errors to target=xx.2.70.xxx:8080. Not rolling hot buckets on further errors to this target. (This condition might exist with other targets too. Please check the logs)"
Hello, I need some short guidance on how to display my dashboards on a smartphone.

The situation is that my Splunk instance is inside the corporate network, and there is no way to open it to the outside world (the internet). However, it is possible to establish a VPN connection from the mobile phone to the corporate network. I would hope that, after opening the Splunk mobile app (with the proper configuration, of course), it should then be no problem for it to display my dashboards and alerts. Is my assumption correct? Could anyone confirm it?

If yes, what components do I need to install on the search head in order to achieve my goal? Splunk Cloud Gateway, even if I do not really want to go outside my corporate network? The description of the Cloud Gateway app says that it is cloud-based. What does that actually mean? I have to avoid sending my data outside the corporate network at any cost; it is just not possible.

Could anyone give me a hint about what I actually need to view my dashboards on my mobile inside the corporate network?

Kind regards, Kamil
Hello, I would like to use a dynamic filter. I have a dropdown ($pool$) which selects only one value from a list. I want to add a static value "all" that takes all the values in the list.

Code working at the moment:

index source | lookup bundle_3dexp.csv bundleid OUTPUTNEW bundleCode | eval poolname=bundleCode+poolLetter | where (poolname="$pool$" AND date >= "$time$") | dedup login | table login

How should I modify the code? By adding an IF statement to the WHERE? Thank you.
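One common pattern is to give the static "All" choice the value * and filter poolname with the search command, which honors wildcards, while where does not. A sketch built on the search above:

```
index source
| lookup bundle_3dexp.csv bundleid OUTPUTNEW bundleCode
| eval poolname=bundleCode+poolLetter
| search poolname="$pool$"
| where date >= "$time$"
| dedup login
| table login
```

When the dropdown's "All" option is selected, poolname="*" matches every pool.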
I need to change my user preferences in my account. In Settings, I can't find the option to edit the already-assigned role.
Can I restore buckets from frozen to cold instead of thawed? A customer of ours has an index that had a frozenTimePeriod of 35 days. We want to increase this to 90 days, but we want all the data that is currently between 35 and 90 days old (and is in frozen now) to be restored to the colddb, so the new frozenTimePeriod setting will apply and the data is automatically removed (frozen again?) once it's older than 90 days. Can this be done easily?
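For the retention change itself, the 90-day period goes in indexes.conf; a sketch (the stanza name is a placeholder, and 7776000 = 90 × 86400 seconds):

```
[customer_index]
frozenTimePeriodInSecs = 7776000
```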
Hi, we are about to start a new project where the project manager needs to know the carbon footprint of the work done by Splunk. How do I calculate that? We will use DB Connect to get data into Splunk (running once every hour) and also the same app to send some data to another tool (once every 24 h). I guess the daily impact will be about 50 MB of data into Splunk and about 10 MB out of Splunk. Is there an app I can use for this, or does Splunk have guidelines on how to calculate this? Thanks, Jonas
Hello Splunkers, I have a scenario in which I have generated two reports:

1) The first shows data for the last 6 months, visualized as a line chart (the data is the count of total movie releases and total hit movies).
2) The second shows data for the last 6 months, visualized as a line chart (the data is the count of Hindi movies, English movies, and Telugu movies).

Is it possible to combine these two reports? If yes, please guide me on how this could be done.
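If both reports are timecharts over the same 6-month window, one way to merge them into a single line chart is appendcols, which joins the second timechart's columns onto the first by row. A sketch; the index and field names (is_hit, language) are assumptions standing in for whatever your saved reports use:

```
index=movies earliest=-6mon@mon
| timechart span=1mon count AS total_releases count(eval(is_hit="yes")) AS hit_movies
| appendcols
    [ search index=movies earliest=-6mon@mon
      | timechart span=1mon count BY language ]
```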