All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I was thinking about using the API, for example to monitor the posts from an official Twitter account. Is it possible to achieve, or ....
Question - If I wanted to prevent SAML/SSO configurations from replicating to the other SHs in a cluster, could I use 'conf_replication_blacklist.<name>' or something similar to exclude authentication.conf? Or would that cause more issues beyond just preventing the SAML/SSO configs from syncing?

Context - We are migrating from on-prem servers to AWS servers. The current SSO/SAML configurations only work for the on-prem servers, and we will need new configs for the AWS servers. The configs are in etc/system/local/authentication.conf, so they are already at the highest precedence. However, while working on those configurations we don't want to break the working SSO for on-prem. We don't want to make it a separate cluster, because then we'd have to replicate all the searches/lookups across some other way.

I came across 'conf_replication_summary.blacklist' and 'conf_replication_include.<conf_file_name> = <boolean>' in the server.conf spec, and was wondering if anyone has experience using these for authentication.conf, and whether there are complications I should be aware of. If we could use these to temporarily pause the replication with no real ill effects, that'd be great.
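A minimal sketch of the second option from the server.conf spec quoted above, assuming it goes in server.conf on each search head cluster member (the key uses the conf file name without the .conf extension; verify against your version's server.conf.spec before relying on it):

[shclustering]
conf_replication_include.authentication = false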
Hi forum, I have a 2-peer single-site (sf=2, rf=2) indexer cluster. We noticed, using the search below, that the primaries for our indexes are not evenly distributed:

| rest splunk_server=local /services/cluster/master/buckets
| rex field=title "^(?<repl_index>[^\~]+)"
| search repl_index="*" standalone=0 frozen=*
| rename title AS bucketID
| fields bucketID peers.*.search_state peers.*.bucket_flags frozen repl_index
| rename peers.3DAB62DE-6D21-4C93-B8E5-A65370709B79.bucket_flags as bucketflags
| eval prim=if(bucketflags = "0x0","prim_yes","prim_no")
| stats count by repl_index prim
| xyseries repl_index prim count
| fillnull prim_yes,prim_no
| eval ratio=prim_yes/(prim_yes+prim_no)
| eval ratio=round(ratio*100,2)
| search repl_index="*"

More or less all primaries are either on one indexer or the other, resulting in uneven load, as we have a search hotspot on one index. We got a far better distribution after we set sf=1, removed the excess buckets, and set sf=2 again. Unfortunately, after stopping an indexer for a while or doing a rolling restart, the primaries are again very unevenly distributed (as seen on the first screenshot). It's also possible to get an even distribution by stopping the cluster master and the peers at the same time and starting them again, but during that time we have data loss. Restarting any component on its own doesn't fix the issue.

We tried to rebalance primaries using:

curl -k -u admin:plaseentercreditcardnumber --request POST https://localhost:8089/services/cluster/master/control/control/rebalance_primaries

Any hints how to fix this? We are using v8.0.7. Best regards, Andreas
Hi everyone, is it possible to achieve this? My search has resulted in four columns:

Column1   Column2   Column3       Column4
Type1     Source1   OK(status)    Item1
Type2     Source2   OK(status)    Item2
Type3     Source3   BAD(status)   Item3
Type4     Source4   OK(status)    Item4
Type5     Source5   BAD(status)   Item5
Type6     Source6   BAD(status)   Item6

I wish to send an email periodically with this text: "At this time, Items: Item1, Item2, Item4 are OK, and Item3, Item5, Item6 are BAD."

Is it possible to filter Items based on Column3 and get all fields in a single line, in order to put them in a message which will also be part of the resulting query? If it is not possible to include both cases (OK and BAD) in the same line, it would be nice to have at least one working.
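A minimal sketch of one way to build such a message, assuming the fields are literally named Column3 and Column4 and that Column3 holds the OK/BAD status (mvjoin and stats are standard SPL; adjust field names to your data):

... | stats values(Column4) as items by Column3
| eval part=mvjoin(items, ", ")." are ".Column3
| stats list(part) as parts
| eval message="At this time, Items: ".mvjoin(parts, ", and ")
| table message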
Hi everyone, I'm using Splunk Security Essentials and I have a problem with a macro: "get_identity4events(user)". The error in the search is:

"Error in 'SearchParser': The search specifies a macro 'get_identity4events' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information."

The macro is missing from the list of macros. I tried to create it, but I can't find the content of this macro in the community or in the Splunk docs. Could you help me please? Thanks, Maxime
Environment - Single Splunk 7.3.9 search head / indexer with FIPS_MODE=1

etc/system/local/server.conf:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME\etc\auth\mycerts\consolidatedCA.pem

[kvstore]
serverCert = mycerts\kvstore_consolidated.pem
sslPassword = <password_for_private_key>

The "kvstore_consolidated.pem" contains my private key and the server cert.

Issue: KV Store fails to start. From splunkd.log:

07-19-2021 11:06:35.763 -0400 ERROR KVStoreConfigurationProvider - Could not get ping from mongod.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreConfigurationProvider - Could not start mongo instance. Initialization failed.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreBulletinBoardManager - KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreBulletinBoardManager - Failed to start KV Store process. See mongod.log and splunkd.log for details.

mongod.log:

2021-07-15T14:36:03.080Z E NETWORK [conn941] SSL peer certificate validation failed: unsupported certificate purpose
2021-07-15T14:36:03.080Z I NETWORK [conn941] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from 127.0.0.1:52128 (connection id: 941)

So it seems like the server is making loopback requests and acting as both the server and the client in SSL comms. In reading this (while it's not the same issue), the suggestion is to have the CA sign the CSR so it's valid as both client and server. Before I go down this road (the CA I am using does not seem to support this; it can only sign as either "user" or "server"), I just want to see if anyone else has run into this. I also tried the server.conf settings in this article, but with the same results: https://splunkcommunity.com/wp-content/uploads/2019/11/FIPSConf_Final.pdf
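The "unsupported certificate purpose" error suggests the cert is missing the clientAuth extended key usage that the loopback handshake needs. A quick way to confirm, assuming OpenSSL is available on the host (this only inspects the cert, it doesn't change Splunk's behavior):

openssl x509 -in kvstore_consolidated.pem -noout -purpose
openssl x509 -in kvstore_consolidated.pem -noout -text | grep -A1 "Extended Key Usage"

In the -purpose output, look for both "SSL client : Yes" and "SSL server : Yes"; if the client purpose is missing, the CA would indeed need to issue the cert with both serverAuth and clientAuth EKUs.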
Hi, I have uploaded JSON data from one of my APM tools into Splunk to get some meaningful insights. There is one event for every hour. Every event has two fields, timeFrameStart and timeFrameEnd, which arrive as epoch time and which I have converted into a human-readable format. There is another field called visits, which tells me how many visits there were in that hour. My requirement is to plot an hourly usage graph with time on the x-axis (probably derived from the timeFrameStart field) and visits on the y-axis. I've written a base query:

source="mydataarchive" host="splunkdev" index="test_index"
| eval startTime=strftime(timeFrameStart/1000,"%a,%d %b %Y %H:%M:%S")
| eval endTime=strftime(timeFrameEnd/1000,"%a,%d %b %Y %H:%M:%S")
| table startTime endTime visits

Let me know if anyone can advise on this using the stats or timechart command.
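A minimal sketch of the timechart approach, assuming timeFrameStart is epoch milliseconds as in the base query above (overwriting _time lets timechart bucket by the hour the event describes; run it over a time range that covers the data):

source="mydataarchive" host="splunkdev" index="test_index"
| eval _time=timeFrameStart/1000
| timechart span=1h sum(visits) as visits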
I have 2 searches: one returns result set A and the other returns result set B. I would like to get the results of A\B (results that appear in A but do not appear in B). To be more specific, the first query is:

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234"
| rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
| stats values(correlation_id)

The query returns a list of correlation_id values, such as:

values(correlation_id)
11
22
33

The second query is almost identical (different log message):

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234"
| rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
| stats values(correlation_id)

So the result has the same structure, for example:

values(correlation_id)
11
88

I would like a query which results in A\B, so in this case it should return:

values(correlation_id)
22
33

I tried this query, but it doesn't work:

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234"
| rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
| stats values(correlation_id)
| search NOT in [search index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234" | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)" | stats values(correlation_id)]
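A minimal sketch of one working pattern, assuming correlation_id is extracted the same way in both searches: keep the outer results one correlation_id per row and let the subsearch emit only the IDs to exclude (a subsearch returns field=value pairs, which NOT then negates):

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234"
| rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
| stats count by correlation_id
| search NOT [ search index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234"
    | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
    | stats count by correlation_id
    | fields correlation_id ]
| fields correlation_id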
I have a set of data with a date field that shows when an asset was assigned. Right now we're pulling the total count of those assets, but we've been asked to show an incrementing count over the course of a line chart. The data looks something like this:

Asset ID    Assigned Date
123         7/12/21
124         7/12/21
125         7/13/21
126         7/14/21

I want the data in the chart to show like this:

7/12/21: 2
7/13/21: 3
7/14/21: 4

Essentially, for each date from the start, the chart adds the counts from the previous dates and charts the running total. Thanks in advance.
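A minimal sketch of a running total with accum, assuming the assigned date can be parsed into _time and the field names mirror the example (adjust to your actual fields):

... | eval _time=strptime('Assigned Date', "%m/%d/%y")
| timechart span=1d count as assigned
| accum assigned as running_total
| fields _time running_total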
Hi, I am using the Threat Intelligence datamodel in my Splunk ES environment. It is being populated by a threat intel feed source. I would now like to check whether certain values from my searches exist in the data model, so I can enrich correlation searches etc. Basically, I want my searches to look up the data model and output whether the value exists, along with the matched value. For example, I have a field named url which will be returned from the following search:

index="cisco_fmc" rec_type_desc="File Malware Event" eventtype=cisco_fmc_malware disposition=Malware

I now want to add SPL to the above so it looks up the value of url against the Threat Intel datamodel. The datamodel contains the standard two fields: threat_match_field, which can be url, and threat_match_value, which is the associated value. If a match is present, I would like to add a new field to the output named match, set to "Yes" if present and "No" if not. I would also like to output the threat_match_value itself. Thanks.
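A minimal sketch of one approach, assuming the ES datamodel is named Threat_Intelligence with a root object Threat_Activity (check the datamodel and object names in your environment; a left join is the simple form for modest result sets, though a lookup scales better for large ones):

index="cisco_fmc" rec_type_desc="File Malware Event" eventtype=cisco_fmc_malware disposition=Malware
| join type=left url [
    | tstats summariesonly=true count from datamodel=Threat_Intelligence.Threat_Activity
        where Threat_Activity.threat_match_field="url" by Threat_Activity.threat_match_value
    | rename Threat_Activity.threat_match_value as url
    | eval match="Yes", threat_match_value=url ]
| fillnull value="No" match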
I want to calculate shift analysts' VPN session start and end time durations, to capture each shift exactly over 24 hours. I have 3 shifts with the following timings:

Morning shift = 7am to 3pm
Evening shift = 3pm to 11pm
Night shift = 11pm to 7am the next morning

I have constructed the following query, but it produces wrong data whenever I increase the time range beyond 24 hours. How can I put an if condition in this query to add a column Shift (morning, evening, night) based on the start and end times?

index=it sourcetype=pulse:connectsecure vendor_product="Pulse Connect Secure" realm=Company-Domain+DUO1001 earliest=-24h
| iplocation src
| eval Attempts=if(vendor_action="started","Session_Started","Session_Ended")
| stats values(Attempts) AS All_Attempts values(src) AS src count(eval(Attempts="Session_Started")) AS Started count(eval(Attempts="Session_Ended")) AS Ended min(_time) AS start_time max(_time) AS end_time by user
| eval Duration=end_time-start_time
| search user=Analyst1 OR user=Analyst2 OR user=Analyst3 OR user=Analyst4 OR user=Analyst5 OR user=Analyst6 OR user=Analyst7 OR user=Analyst8 OR user=Analyst9
| convert ctime(start_time)
| convert ctime(end_time)
| eval totall_duration=tostring(Duration,"duration")
| table user,All_Attempts,src,Started,Ended,start_time,end_time,totall_duration

In Excel I use the following formula to calculate the shift from the ticket close time:

=IF(HOUR(E2)<7,"Night Shift",IF(HOUR(E2)<15,"Morning Shift",IF(HOUR(E2)<23,"Evening Shift","Night Shift")))

How can I insert a similar condition in Splunk to get a new calculated column called Shift, alongside the session start, session end, and the duration between both times?

@manjunathmeti @woodcock
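A direct translation of the Excel formula into SPL, as a sketch assuming the shift is derived from the session start time; insert it after the stats line and before the convert/ctime steps, while start_time is still epoch:

| eval hour=tonumber(strftime(start_time, "%H"))
| eval Shift=case(hour>=7 AND hour<15, "Morning Shift", hour>=15 AND hour<23, "Evening Shift", true(), "Night Shift")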
I have the following data in my table:

part1.part2.answer.local
part1-part2..part3.part4.answer.net
part11.part11-part11.answerxyz.net
part1-part2-part3-part4.answer.net
part1-part2-part3-part6.answer.com
part127.09 abcd (+789)
part127.08 abcd (+123)
part127.06 abcd (+456)

I want to split it as follows:

1) If there is a space present in the data, then it should be returned exactly as it is:

input                      output
part127.09 abcd (+789)     part127.09 abcd (+789)
part127.08 abcd (+123)     part127.08 abcd (+123)
part127.06 abcd (+456)     part127.06 abcd (+456)

2) If there is no space, then the part before the first dot should be returned:

input                                  output
part1.part2.answer.local               part1
part1-part2..part3.part4.answer.net    part1-part2
part11.part11-part11.answerxyz.net     part11
part1-part2-part3-part4.answer.net     part1-part2-part3-part4
part1-part2-part3-part6.answer.com     part1-part2-part3-part6

I've tried this:

index=ind sourcetype=src
| fields f1
| where f1 != "null"
| dedup f1
| eval temp=f1
| eval derived_name_having_space=if(match(f1,"\s*[[:space:]]+"),1,0)
| eval with_Space=f1
| where derived_name_having_space=1
| eval without_Space=mvindex(split(temp,"."),0)
| where derived_name_having_space=0
| table with_Space without_Space f1

Here I'm not getting any rows returned.

But when I remove this part from the above query:

| eval without_Space=mvindex(split(temp,"."),0)
| where derived_name_having_space=0

I get the correct results for the rows where derived_name_having_space=1.

Similarly, when I remove this part:

| eval with_Space=f1
| where derived_name_having_space=1

I don't get the correct results for the rows where derived_name_having_space=0:

input                     output
part127.09 abcd (+789)    part127
part127.08 abcd (+123)    part127
part127.06 abcd (+456)    part127

Since they all evaluate to the same result, it creates a problem while deduping.

I've used the regex classes from here: https://www.debuggex.com/cheatsheet/regex/pcre

Can anyone point out where I'm going wrong, or suggest another approach? Thanks
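The two where clauses are chained: the first keeps only rows with derived_name_having_space=1, and the second then demands derived_name_having_space=0 on those same rows, so nothing survives. A minimal sketch of a branchless alternative that keeps every row, using the field names from the question:

index=ind sourcetype=src
| fields f1
| where f1 != "null"
| dedup f1
| eval result=if(match(f1, "\s"), f1, mvindex(split(f1, "."), 0))
| table f1 result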
Hi, I have been trying to fetch agent logs through the AppDynamics controller itself. I am not able to understand the use of "Logger name" in the Request Agent Log Files feature. I have tried to download multiple log files with different logger names, e.g. com.appdynamics and com.appdynamics.BusinessTransaction, but the files that get downloaded have the same size and the same types of content (like BTs and bytecode). Could you please explain the use of this tab? Regards, Ujjwal.
I want to execute a query in app1, but I want to get the data from app2. For example, executing the query index="abc" in app1 should get the data from app2. Please help!
I am sending data to Splunk using HEC, but after trying all the methods exposed by the Splunk API, I am getting all the custom properties nested under a single "message" or "data" attribute. Is there a way to have all my properties logged in their original format and not under a single head?

Actual:
{ "ID": 123, "message": { "src": "abcd", "category": "list", "user": "tchsavy" } }

Expected:
{ "ID": 123, "message": "Hello", "src": "abcd", "category": "list", "user": "tchsavy" }
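A minimal sketch of posting straight to the HEC event endpoint with curl, assuming a valid token and the default port 8088. When the "event" value is itself a JSON object, its keys stay at the top level of the indexed event; the extra "message"/"data" wrapper usually comes from a logging library's formatter rather than from HEC itself:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec_token>" \
  -d '{"event": {"ID": 123, "message": "Hello", "src": "abcd", "category": "list", "user": "tchsavy"}}'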
We are trying to develop a Monitoring-as-Code application. To start with, we want to export the existing Splunk configuration in .tf file format, and then we can modify the .tf files to change the respective Splunk configuration. I can see Splunk has provided a Terraform provider: https://registry.terraform.io/providers/splunk/splunk/latest

Is there a way we can export the existing Splunk configuration in .tf file format? I am open to suggestions if there is some better way to implement a Monitoring-as-Code solution around Splunk.
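Terraform itself does not generate .tf files from live infrastructure; the usual pattern is to write the resource blocks by hand and then bring existing objects under management with terraform import. A minimal sketch, assuming the provider's splunk_saved_searches resource and a hypothetical saved search named "my_alert" (check the provider docs for exact resource names and import ID formats):

# main.tf
resource "splunk_saved_searches" "my_alert" {
  name   = "my_alert"
  search = "index=_internal | stats count"
}

Then import the existing object into state:

terraform import splunk_saved_searches.my_alert my_alert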
I receive a bunch of messages that are all assigned to a group by groupID. I also have a dynamic range, as a multivalue field, that needs to be used as a filter for these messages. I tried it like this so far, but couldn't get any results:

index=my_index sourcetype=my_source
| eval range=case("case1", mvrange(1,9), "case2", mvrange(10,19), ...)
| where groupID in (range)
| stats count(_raw) as count by groupID

So if case1 happens, I only want to see the number of messages in the specified groupID range, and so on. Can anyone help me with that?
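Two issues stand out in the attempt above: case() expects condition/value pairs (a bare string like "case1" is not a condition), and in() does not expand a multivalue field as its list. A minimal sketch that compares against range bounds instead, where cond1/cond2 are hypothetical stand-ins for whatever selects each case; note mvrange(1,9) is end-exclusive (1..8), which the bounds here mirror:

index=my_index sourcetype=my_source
| eval low=case(cond1, 1, cond2, 10), high=case(cond1, 8, cond2, 18)
| where groupID>=low AND groupID<=high
| stats count as count by groupID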
@niketn I am trying to display the selected start and end time in the UI. I followed in particular the answer you gave here: https://community.splunk.com/t5/Dashboards-Visualizations/Setting-job-earliestTime-and-job-latestTime-tokens-for-the-date/m-p/345200/highlight/true#M22464

It was working fine, but suddenly it stopped, saying "Invalid Date". We recently had a Splunk upgrade to version 8.0.4.1. Could it be due to the upgrade? Was there any change? I couldn't narrow it down to the exact issue. Here is the code you shared:

<form>
  <label>Show Time from Time Picker</label>
  <!-- Dummy search to pull selected time range earliest and latest date/time -->
  <search>
    <query>| makeresults | addinfo</query>
    <earliest>$field1.earliest$</earliest>
    <latest>$field1.latest$</latest>
    <done>
      <eval token="tokEarliestTime">strftime(strptime('$job.earliestTime$',"%Y/%m/%dT%H:%M:%S.%3N %p"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>
      <eval token="tokLatestTime">strftime(strptime('$job.latestTime$',"%Y/%m/%dT%H:%M:%S.%3N %p"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>
    </done>
  </search>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <!-- sample HTML Panel to display results in required format -->
      <html>
        ( $tokEarliestTime$ to $tokLatestTime$ )
      </html>
    </panel>
  </row>
</form>

Attaching a screenshot of what is shown in the UI. Could you please suggest?
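One hedged guess: if the upgrade changed $job.earliestTime$ to an ISO-8601 representation (e.g. 2021-07-20T08:00:00.000-04:00), the old strptime pattern would no longer match and would yield "Invalid Date". A sketch of the two eval tokens adjusted for that format, under that assumption (verify first by printing $job.earliestTime$ raw in the HTML panel):

<eval token="tokEarliestTime">strftime(strptime('$job.earliestTime$',"%Y-%m-%dT%H:%M:%S.%3N%z"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>
<eval token="tokLatestTime">strftime(strptime('$job.latestTime$',"%Y-%m-%dT%H:%M:%S.%3N%z"),"%m/%d/%y %I:%M:%S.%3N %p")</eval>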
Hi. I have seen strange behaviour for about 48 hours from a single UF:

1) On the UF, both metrics.log and splunkd.log show events and NO ERRORS! The connection to the outputs is OK!
2) The UF has not been touched in the last 48h: same conf, same add-ons, same everything.
3) The UF was updated to a clean 7.2.0, but the problem remains; rolled back to the previous version...
4) All inputs are being sent, but _internal (metrics.log/splunkd.log) has NOT arrived for 48h!!!
5) I also cleaned the log dir on the UF of the rotated *.? files and the live metrics and splunkd logs, and restarted!!! No way!!!
6) Deleted the add-ons and redeployed. No way!!! _internal is still missing!!!

Any idea? Thanks.
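A couple of quick diagnostic sketches, assuming shell access on the forwarder: confirm the effective outputs config isn't filtering _internal forwarding, and check from the indexer side when the last internal event from that host arrived.

On the UF:

$SPLUNK_HOME/bin/splunk btool outputs list --debug
$SPLUNK_HOME/bin/splunk list forward-server

On the search head (substitute the forwarder's hostname):

index=_internal host=<uf_hostname> | stats latest(_time) as last_seen | convert ctime(last_seen)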
Need help with a Splunk query to display % failures, where % failures = A1/A2 * 100.

A1 = total number of events returned by: index="abc" "searchTermForA1"
A2 = total number of events returned by: index="xyz" "searchTermForA2"

Please help with the query. Thanks!
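A minimal sketch of one way to compute this in a single search, assuming the two terms live in separate indexes as described, so the index field can tell the two sides apart:

(index="abc" "searchTermForA1") OR (index="xyz" "searchTermForA2")
| stats count(eval(index="abc")) as A1, count(eval(index="xyz")) as A2
| eval pct_failures=round(A1/A2*100, 2)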