All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I want to integrate data from a Splunk App to the Vuln centre in Enterprise Security. Has anyone done this before?
bin _time span=1h | stats count(eval(eventDay==curDay)) AS cv by uid | stats count(eval(eventDay!=curDay)) AS ce by _time, uid

The following command returns a null value in "cv". @Lowell @Hazel @ramdaspr @ITWhisperer
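One possible cause, sketched below as an assumption (the full search and the source of eventDay/curDay are not shown): stats keeps only its BY fields and aggregates, so after the first `stats ... by uid`, the fields _time, eventDay, and curDay no longer exist, and the second stats has nothing to count. Computing both counts in a single stats avoids this:

```spl
... | bin _time span=1h
| stats count(eval(eventDay==curDay)) AS cv, count(eval(eventDay!=curDay)) AS ce by _time, uid
```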
My tstats search is basically this:   | tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic where sourcetype="<blah>*" by _time | timechart span=1d sum(count)   I have used this search to create 3 separate charts, as per the image below. Question: How can I overlay all 3 timecharts together? Is there SPL for this?
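If the three charts differ only by sourcetype, one hedged option (the sourcetype placeholders below are assumptions, since the originals are redacted) is to run a single tstats with a split-by field and let timechart draw one line per series:

```spl
| tstats summariesonly=t fillnull_value="MISSING" count
    from datamodel=Network_Traffic.All_Traffic
    where sourcetype IN ("<blah1>*", "<blah2>*", "<blah3>*")
    by _time, sourcetype span=1d
| timechart span=1d sum(count) by sourcetype
```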
I have a spreadsheet today that I use to display current incident status, and I would like to have the Splunk dashboard associated with that application embedded so it displays for everyone to see. Is this possible? Do I just put the Splunk dashboard URL into a cell, or do I create a pivot? Looking for a little assistance if this is possible.
Hi Splunk Team. Can I use a variable reference in the To: field of an email alert? I have a distribution_list variable associated with my sourcetype, and it is set to the correct email address depending on date and time. I put $result.distribution_list$ in the To: field, but it does not send an email. Thanks, Michal
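$result.fieldname$ tokens are resolved from the first row of the alert's results, so one thing worth checking (a sketch, assuming the rest of the alert is configured correctly) is that distribution_list is actually present in the final result table:

```spl
<your alert search>
| eval distribution_list=coalesce(distribution_list, "fallback@example.com")
| table distribution_list *
```

The fallback address here is a hypothetical placeholder, not something from the original question.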
Hello All, I am trying to create a dashboard with an interactive input search bar. The field that is searchable always has 16 digits, but I want the users to search with only the first 6 digits; for example, they will enter the number 123456, and every occurrence of "123456" will show in the report. I tried to use the suffix field, putting in an "*", but it failed; I also tried to include the "*" in the search with the token, but it also failed. Is there any way to do it? Thank you in advance!
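A common pattern for prefix matching is to append the wildcard to the token inside the query rather than in the input itself. A minimal Simple XML sketch (the field name and token name are assumptions, not from the original question):

```xml
<input type="text" token="prefix_tok">
  <label>First 6 digits</label>
</input>
<search>
  <query>index=main card_number="$prefix_tok$*"</query>
</search>
```

Note that this only matches as a prefix if the field is extracted as a string value.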
We are getting a "The search job terminated unexpectedly" error on a panel while running the dashboard. We have checked the resources on the roles; it doesn't seem to be an issue with limitations. Below are the errors we are getting in splunkd.log:

07-19-2021 18:54:38.615 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"

07-19-2021 18:52:27.882 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"

07-19-2021 18:52:39.832 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"

07-19-2021 18:52:52.889 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"

07-19-2021 18:52:52.950 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
Hi, I am looking at generating a search to find the 1% slowest requests from IIS logs; however, I am not sure if this is possible. Just wondered if anyone has done something similar before? Thanks, Joe
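One hedged approach, assuming the IIS time_taken field is extracted for your sourcetype: compute the 99th percentile with eventstats and keep only the requests above it:

```spl
index=iis sourcetype=iis
| eventstats perc99(time_taken) AS p99
| where time_taken > p99
| sort - time_taken
```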
Hi All, one of our teammates disabled and re-enabled some apps on the SH, after which the Next Scheduled Time is showing "none" for all the alerts on the SH. In the alert settings we can see that the cron schedule is configured, but none of the alerts are triggering from our SH. Can anyone suggest which setting or parameter should be checked? Screenshot below FYR.
Hi all, I recently uploaded the Splunk Add-on for New Relic, version 2.2.0, and it got rejected by the Splunk vetting process. There are no errors, but 10 warnings. Any idea how I can get this working for Splunk Cloud? We need this working to get the log and APM details from New Relic into Splunk, so if there are alternatives out there, I could use some help around that too. Thank you, Vinod
I was thinking about using an API, something like monitoring the posts from an official Twitter account. Is it possible to achieve this, or ....
Question - If I wanted to prevent SAML/SSO configurations from replicating to other SHs in a cluster, could I use 'conf_replication_blacklist.<name>' or something similar to exclude authentication.conf? Or would that cause more issues beyond just preventing SAML/SSO configs from being synced? Context - We are migrating from on-prem servers to AWS servers. The current configurations for SSO/SAML only work for the on-prem servers, and we will need new configs for the AWS servers. The configs are in etc/system/local/authentication.conf, so they are already at the highest precedence. However, while working on those configurations we don't want to break the working SSO for on-prem. We don't want to make it a separate cluster, because then we'd have to get all the searches/lookups replicated across some other way. I came across 'conf_replication_summary.blacklist' and 'conf_replication_include.<conf_file_name> = <boolean>' in the server.conf spec, and was wondering if anyone has experience using these for authentication.conf and whether there are complications I should be aware of. Because if we could use these to temporarily pause the replication with no real ill effects, that'd be great.
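For reference, a sketch of the relevant server.conf setting on each SH cluster member (verify the exact key against the server.conf spec for your version before relying on it):

```ini
[shclustering]
# Exclude authentication.conf from SHC configuration replication
conf_replication_include.authentication = false
```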
Hi forum, I have a 2-peer single-site (sf=2, rf=2) indexer cluster. We noticed that the primaries for indexes are not evenly distributed, using the search:

| rest splunk_server=local /services/cluster/master/buckets | rex field=title "^(?<repl_index>[^\~]+)" | search repl_index="*" standalone=0 frozen=* | rename title AS bucketID | fields bucketID peers.*.search_state peers.*.bucket_flags frozen repl_index | rename peers.3DAB62DE-6D21-4C93-B8E5-A65370709B79.bucket_flags as bucketflags | eval prim=if(bucketflags = "0x0","prim_yes","prim_no") | stats count by repl_index prim | xyseries repl_index prim count | fillnull prim_yes,prim_no | eval ratio=prim_yes/(prim_yes+prim_no) | eval ratio=round(ratio*100,2) | search repl_index="*"

More or less all primaries are either on one indexer or the other, resulting in uneven load, as we have a search hotspot on one index. We were able to get a far better distribution after we set sf=1, removed excess buckets, and set sf=2 again. Unfortunately, after stopping an indexer for a while or doing a rolling restart, the primaries are again very unevenly distributed (as seen in the first screenshot). It is also possible to get an even distribution by stopping the cluster master and peers at the same time and starting them again, but during this time we have data loss. Restarting any single component on its own doesn't fix the issue. We tried to rebalance primaries using:

curl -k -u admin:plaseentercreditcardnumber --request POST https://localhost:8089/services/cluster/master/control/control/rebalance_primaries

Any hints on how to fix this? We are using v8.0.7. Best regards, Andreas
Hi everyone, is it possible to achieve this? My search has resulted in four columns:

Column1    Column2    Column3        Column4
Type1      Source1    OK(status)     Item1
Type2      Source2    OK(status)     Item2
Type3      Source3    BAD(status)    Item3
Type4      Source4    OK(status)     Item4
Type5      Source5    BAD(status)    Item5
Type6      Source6    BAD(status)    Item6

I wish to send an email periodically with this text: "At this time, Items: Item1, Item2, Item4 are OK, and Item3, Item5, Item6 are BAD." Is it possible to filter items based on Column3 and get all fields in a single line, in order to put them in a message which will also be part of the resulting query? If it is not possible to handle both cases (OK and BAD) in the same line, it would be nice to have at least one working.
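One way to sketch this (assuming the column names are literally Column3 and Column4): group the items by status, join each group into a string, and then collapse everything into a single message field:

```spl
<base search>
| stats values(Column4) AS items by Column3
| eval items=mvjoin(items, ", ")
| eval line="Items: ".items." are ".Column3
| stats values(line) AS lines
| eval message="At this time, ".mvjoin(lines, ", and ")
| table message
```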
Hi everyone, I'm using Splunk Security Essentials and I have a problem with a macro: "get_identity4events(user)". The error in the search is: "Error in 'SearchParser': The search specifies a macro 'get_identity4events' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information." The macro is missing from the list of macros. I tried to create it, but I can't find the content of this macro in the community or in the Splunk docs. Could you help me please? Thanks, Maxime
Environment: single Splunk 7.3.9 search head / indexer with FIPS_MODE=1.

etc/system/local/server.conf:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME\etc\auth\mycerts\consolidatedCA.pem

[kvstore]
serverCert = mycerts\kvstore_consolidated.pem
sslPassword = <password_for_private_key>

The "kvstore_consolidated.pem" contains my private key and the server cert.

Issue: KV Store fails to start (log below from splunkd.log):

07-19-2021 11:06:35.763 -0400 ERROR KVStoreConfigurationProvider - Could not get ping from mongod.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreConfigurationProvider - Could not start mongo instance. Initialization failed.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreBulletinBoardManager - KV Store changed status to failed. Failed to start KV Store process. See mongod.log and splunkd.log for details.
07-19-2021 11:06:35.763 -0400 ERROR KVStoreBulletinBoardManager - Failed to start KV Store process. See mongod.log and splunkd.log for details.

mongod.log:

2021-07-15T14:36:03.080Z E NETWORK [conn941] SSL peer certificate validation failed: unsupported certificate purpose
2021-07-15T14:36:03.080Z I NETWORK [conn941] Error receiving request from client: SSLHandshakeFailed: SSL peer certificate validation failed: unsupported certificate purpose. Ending connection from 127.0.0.1:52128 (connection id: 941)

So it seems like the server is trying to make loopback requests and acting as both the server and the client in SSL communications. In reading this (while it's not the same issue), the suggestion is to have the CA sign the CSR so it is valid as both client and server. Before I go down this road (the CA I am using does not seem to support this; it can only sign as either "user" or "server"), I just want to see if anyone else has run into this. I also tried the server.conf settings in this article, but with the same results: https://splunkcommunity.com/wp-content/uploads/2019/11/FIPSConf_Final.pdf
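For what it's worth, a sketch of an OpenSSL extension section requesting a certificate usable for both sides of the handshake (whether this helps depends entirely on your CA honoring the requested extensions):

```ini
[ v3_req ]
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
```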
Hi, I have uploaded JSON data from one of my APM tools into Splunk to get some meaningful insights. The events are there for every hour. Every event has 2 fields, timeFrameStart and timeFrameEnd, which come in as epoch time; I have converted them into human-readable format. There is another field called visits, which tells me how many visits there were in that hour. My requirement is to plot an hourly usage graph with time on the x-axis (probably derived from the timeFrameStart field) and visits on the y-axis. I've written a base query:   source="mydataarchive" host="splunkdev" index="test_index" | eval startTime=strftime(timeFrameStart/1000,"%a,%d %b %Y %H:%M:%S") | eval endTime=strftime(timeFrameEnd/1000,"%a,%d %b %Y %H:%M:%S") | table startTime endTime visits   Let me know if anyone can advise on this using the stats or timechart command.
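A minimal sketch, assuming visits is extracted as a numeric field: override _time with timeFrameStart (dividing by 1000 to convert milliseconds to seconds, as in the base query) and let timechart do the bucketing:

```spl
source="mydataarchive" host="splunkdev" index="test_index"
| eval _time=timeFrameStart/1000
| timechart span=1h sum(visits) AS visits
```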
I have 2 query searches; one returns result set A and the other returns result set B. I would like to get the results of A\B (results that appear in A but do not appear in B). To be more specific, the first query is:

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234" | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)" | stats values(correlation_id)

The query returns a list of correlation_id values, such as:

values(correlation_id)
11
22
33

The second query is almost identical (different log message):

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234" | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)" | stats values(correlation_id)

So the result has the same structure, for example:

values(correlation_id)
11
88

I would like a query which results in A\B, so in this case it should be:

values(correlation_id)
22
33

I tried this query, but it doesn't work:

index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234" | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)" | stats values(correlation_id) | search NOT in [search index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234" | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)" | stats values(correlation_id)]
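One hedged variant (keeping the redacted placeholders as-is): filter events before the stats, and have the subsearch return bare correlation_id values so that `search NOT [...]` excludes them:

```spl
index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "some log message" "accountId=1234"
| rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
| search NOT [ search index="<...>" "attrs.pod"="<...>" "attrs.env"=<...> "Other log message" "accountId=1234"
    | rex field=line "accountId=(?<account_id>[0-9]+).*correlationId=(?<correlation_id>[\w-]+)"
    | fields correlation_id ]
| stats values(correlation_id)
```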
I have a set of data with a date field that shows when an asset was assigned. Right now, we're pulling the total count of those assets, but we have been asked to show an incrementing count over the course of a line chart. The data looks something like this:

Asset ID    Assigned Date
123         7/12/21
124         7/12/21
125         7/13/21
126         7/14/21

I want the data in the chart to show like this:

7/12/21: 2
7/13/21: 3
7/14/21: 4

Essentially, for each date from the start, the chart adds the counts from the previous dates and charts the running total. Thanks in advance.
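A sketch using accum for the running total (assuming "Assigned Date" can be parsed into _time; field names are taken from the example data):

```spl
<base search>
| eval _time=strptime('Assigned Date', "%m/%d/%y")
| timechart span=1d count AS assigned
| accum assigned AS running_total
| fields _time running_total
```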
Hi, I am using the Threat Intelligence datamodel in my Splunk ES environment. It is being populated by a threat intel feed source. I would now like to check whether certain values from my searches exist in the data model, so I can enrich correlation searches etc. I basically want my searches to look up the data model and output whether the value exists, along with the matched value. For example, I have a field named url which will be returned from the following search: index="cisco_fmc" rec_type_desc="File Malware Event" eventtype=cisco_fmc_malware disposition=Malware I now want to add SPL to the above so it looks up the value of url against the Threat Intel datamodel. The datamodel contains the standard two fields - threat_match_field, which can be url, and threat_match_value, which is the associated value. If present, I would like to add a new field to the output named match, which should be set to "Yes" if present and "No" if not. I would also like to output the threat_match_value itself. Thanks.
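A hedged sketch using a subsearch against the Threat Intelligence datamodel (the Threat_Activity dataset name and its field names are assumptions to verify against your ES version):

```spl
index="cisco_fmc" rec_type_desc="File Malware Event" eventtype=cisco_fmc_malware disposition=Malware
| join type=left url [
    | tstats summariesonly=t count from datamodel=Threat_Intelligence.Threat_Activity
        where Threat_Activity.threat_match_field="url"
        by Threat_Activity.threat_match_value
    | rename Threat_Activity.threat_match_value AS url
    | eval match="Yes", threat_match_value=url
    | fields url match threat_match_value ]
| fillnull value="No" match
```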