All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello there, does anyone know how easily Splunk add-on applications can be tracked, from a health-check point of view, for when versions are due to go out of date / become unsupported, and which versions are supported? At present I cannot see a central repository / area that provides this information, or a health-check plug-in that provides it as a dashboard in Splunk. How do you manage your vulnerability / health checking on Splunk add-on applications?
Suppose I have a process that produces input and output counts, and based on those we calculate a rejection percentage. If the rejection percentage is less than 20, I need to display "Good"; if it is more than 20, "Bad"; and if the corresponding input/output is not available, I want to display "In Progress". Please help me implement this. The input and output values come from a data model.
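A minimal SPL sketch of the thresholding, assuming the data model supplies fields named input_count and output_count (hypothetical names; substitute your own):

```
| eval rejection_pct = round(((input_count - output_count) / input_count) * 100, 2)
| eval status = case(isnull(input_count) OR isnull(output_count), "In Progress",
                     rejection_pct < 20, "Good",
                     true(), "Bad")
```

case() evaluates its conditions in order, so the null check must come first; true() acts as the catch-all for everything at or above 20.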
Hi Splunk Experts, below is a sample event. I have the spath msg.message.details, and I am trying to extract certain fields from the details path. How can I extract 'msg.message.details' into fields? I am still a newbie, learning on the go in the Splunk world. I am guessing I should use rex, but is there a way using spath? Our index has other structured JSON paths, e.g. msg.message.header.correlationId, etc.

{ [-] cf_app_id: test123 cf_app_name: test event_type: LogMessage job_index: ebcf8d13 message_type: OUT msg: { [-] level: INFO logger: UpdateContact message: { [-] details: Data{SystemId='null', language='English', parentSourceSystemAction='null', contactId='cf4cae75-28b3', status='Active', birthDate='1991-01-15', eventAction='Create', Accounts=[CustomerAccounts{ Case='000899', accountid='4DA4F29E', contactRelationship=ContactRelationship{expiryDate='', contactType='owner', endDate=''}}],workContact=WorkContact{faxNumber='null', mobileNumber='null', emailAddress='null', phoneNumber='null'},homeContact=HomeContact{faxNumber='null', mobileNumber='null', emailAddress='', phoneNumber='null'},businessAddress=null,personalAddress=[PersonalAddress{addressId='9205', locality='PARK', internationalPostCode='null', internationalState='null', additionalInfo='null', isPrimary='Y', streetNumberStart='null', addressType='null', status='CO', streetNumberStartSuffix='null', postalCode='765', streetNumberEnd='null', streetName='null', country='null', streetNumberEndSuffix='null', streetType='null', state='null', subAddress=SubAddress{buildingName='null', numberStart='null', addressLines=[MIL PDE,], details=[Details{value='null', detailType='null'}, Details{value='null', detailType='null'}]}}],idv=Identification{doc=License{state='null', number='null'}}} header: { [-] correlationId: 707000J-52f6-10df-00f3-f859-1c5ed entityId: cf75-2b3-cb38-cef-a72ad88 entityName: test errorCode: null errorMessage: null eventName: testevent processName: process1 processStatus: SUCCESS serviceName: testservice serviceType: Dispatch } } timestamp: 2021-07-20 } origin: rep timestamp: 1626764261880766200 }

Any help is much appreciated. Thanks
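Since the details value is Java toString output rather than valid JSON, spath can isolate it as a string but cannot descend into it; a rex pass over that string is one way forward. A sketch, using field names taken from the sample event above:

```
| spath output=details path=msg.message.details
| rex field=details "contactId='(?<contactId>[^']*)'"
| rex field=details "status='(?<status>[^']*)'"
| rex field=details "eventAction='(?<eventAction>[^']*)'"
```

The same `key='value'` pattern should extract any of the other single-quoted fields inside details.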
Good afternoon! I have Palo Alto generating logs and forwarding them to Splunk. I want to use the Palo Alto Networks app, but I can't get it to work correctly with the configurations I followed; the only thing I get so far is the logs showing in the Real-time Event Feed. I would like to understand how Splunk and this Palo Alto add-on work: how to configure it and how to manage it, since I cannot find documentation that explains it well. One of the things I would like is for the Palo Alto information to also appear under GlobalProtect, etc.; in short, I want to understand the concepts and how to route the information to the GlobalProtect view. Thank you very much in advance!
I have some event data with fields like EventId, EventTime, EventRunId, AccountId, etc. For my use case I need to export the search-result data to AWS S3, and for that I am using the "Event Push by Deductiv" app. In this app I have written the query below, which converts all search results to a .csv file and sends it to an S3 bucket.

source="events.csv" host="pool150.info.com" sourcetype="csv" | s3ep credential=default_password outputfile="eventcsv.csv" outputformat=csv compression=false fields="accountId,eventId,eventrunId,EventTime"

My goal is a search query that exports results incrementally. We can create an alert running every hour. For example, say I searched just now (15:00 GMT+5:30) and the latest record had EventTime 14:53 GMT+5:30. If I search again after one hour (16:00 GMT+5:30), it should bring only records with EventTime > 14:53 GMT+5:30 and EventTime <= 16:00 GMT+5:30. We might need a variable to store the latest event time somewhere and compare it against incoming records. Also, if the Splunk server is restarted or breaks down, then after it comes back up it should pick up the persisted latest EventTime and carry on the incremental export. My approach may be wrong and you may have a better solution. Please help me.
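One way to persist the high-water mark across runs (and across server restarts) is a checkpoint lookup, since lookup files live on disk. A sketch, assuming a hypothetical lookup export_checkpoint.csv seeded once with a last_event_time column, and assuming s3ep passes events through the pipeline (unverified):

```
source="events.csv" host="pool150.info.com" sourcetype="csv"
    [| inputlookup export_checkpoint.csv
     | stats max(last_event_time) AS earliest
     | return earliest]
| s3ep credential=default_password outputfile="eventcsv.csv" outputformat=csv compression=false fields="accountId,eventId,eventrunId,EventTime"
| stats max(_time) AS last_event_time
| outputlookup export_checkpoint.csv
```

The subsearch's `return earliest` injects `earliest=<epoch>` into the outer search, so each hourly run starts where the previous one stopped, and the final outputlookup advances the checkpoint.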
Hi all, I have to monitor a vCenter system (VMware). I tried to download the app from Splunkbase, but it's at end of life. I read that it will be replaced by IT Essentials Work, so I downloaded that. It seems like ITSI! Can anyone explain the differences between this app and ITSI? (I already know that this one is free and ITSI is a premium app!) Has anyone tried this app (which has 0 downloads!)? Ciao. Giuseppe
Hi, I have a DNS log whose fields are not extracted properly, so I used rex. I encountered a problem. When I search index=dns* source="516" host=dns-sender, all fields are extracted correctly. But when I search | from datamodel:Network_Resolution | search dns-sender, my fields get the value "unknown". Can anyone help me?
Hello! Can anyone please help with how to know whether a scheduled alert ran or not? We set the below alert for every Monday 6:00 am. Alert example:

| makeresults
| eval ip_ports = "10.120.121.100:9443"
| eval ip_ports = split(ip_ports,",")
| mvexpand ip_ports
| rex field=ip_ports "(?<dest>[^:]+):(?<dest_port>\d+)"
| table dest dest_port
| lookup sslcert_lookup dest dest_port
| eval days_left = round(ssl_validity_window/86400)
| eval ssl_end_time=strftime(ssl_end_time,"%Y-%m-%d")
| eval ssl_start_time=strftime(ssl_start_time,"%Y-%m-%d")
| where days_left < 60
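Scheduled-alert executions are logged by the scheduler in the _internal index, so one way to confirm whether an alert actually ran is a search like this (the savedsearch_name value below is a placeholder; use your alert's actual name):

```
index=_internal sourcetype=scheduler savedsearch_name="My SSL Expiry Alert"
| table _time savedsearch_name status run_time result_count
```

A status of success means the scheduler ran it; skipped or deferred entries explain a run that never fired.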
I am getting the error below: "File will not be read, seekptr checksum did not match (file=<file name>0). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info." I already have crcSalt = <SOURCE> in inputs.conf; what more can I do?
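The error message's other suggestion is to raise initCrcLength in inputs.conf, so the seek-pointer checksum covers more of each file's header. A sketch of a monitor stanza, with a hypothetical path and sourcetype:

```
[monitor:///var/log/myapp/events.log]
sourcetype = my_sourcetype
crcSalt = <SOURCE>
# default is 256 bytes; a larger value helps when many files begin with identical headers
initCrcLength = 1024
```

A forwarder restart is needed for the change to take effect.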
Can I specify an app name in a Splunk query and run that query from any app?
I want to integrate data from a Splunk app into the Vulnerability Center in Enterprise Security. Has anyone done this before?
| bin _time span=1h
| stats count(eval(eventDay==curDay)) AS cv by uid
| stats count(eval(eventDay!=curDay)) AS ce by _time, uid

The commands above return a null value in "cv". @Lowell @Hazel @ramdaspr @ITWhisperer
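After the first stats, the pipeline only contains cv and uid: eventDay, curDay, and _time are gone, so the second stats can neither compute ce correctly nor preserve cv. Assuming the intent is both counts per hour per uid (an assumption), one sketch collapses everything into a single stats:

```
| bin _time span=1h
| stats count(eval(eventDay==curDay)) AS cv, count(eval(eventDay!=curDay)) AS ce BY _time, uid
```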
My tstats search is basically this:

| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic where sourcetype="<blah>*" by _time
| timechart span=1d sum(count)

I have used this search to create 3 separate charts, as per the image below. Question: how can I overlay all 3 timecharts together? Is there SPL for this?
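If the three charts differ only by sourcetype, one sketch is a single tstats split by sourcetype, which timechart then renders as three overlaid series on one chart (the sourcetype patterns below are placeholders for the three real ones):

```
| tstats summariesonly=t fillnull_value="MISSING" count from datamodel=Network_Traffic.All_Traffic where sourcetype IN ("a*", "b*", "c*") by _time, sourcetype
| timechart span=1d sum(count) by sourcetype
```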
I have a spreadsheet today that I use to display current incident status, and I would like the Splunk dashboard associated with that application embedded in it, displayed for everyone to see. Is this possible? Do I just put the Splunk dashboard URL into a cell? Or do I create a pivot? Looking for a little assistance, if this is possible.
Hi Splunk Team. Can I use a variable reference in the To: field of an email alert? I have a distribution_list variable associated with my sourcetype, and it is set to the correct email address depending on date and time. I put $result.distribution_list$ in the To: field, but it does not send the email. Thanks, Michal
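$result.fieldname$ is resolved from the first row of the alert's final results, so the field has to survive to the end of the search. A sketch of the tail of such a search, where the coalesce fallback address is purely illustrative:

```
| eval distribution_list = coalesce(distribution_list, "fallback@example.com")
| table distribution_list
```

With the field guaranteed present in the first result row, $result.distribution_list$ in the To: field has something to expand to.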
Hello All, I am trying to create a dashboard with an interactive search-bar input. The searchable field always has 16 digits, but I want users to search with only the first 6 digits; for example, they will enter the number 123456, and every occurrence matching "123456" should show in the report. I tried to use the input's suffix field, putting in an "*", but it failed; I also tried to include the "*" in the search with the token, but that failed too. Is there any way to do it? Thank you in advance!
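A minimal Simple XML sketch of one way to do this, appending the wildcard directly inside the query rather than via the input's suffix (the index and field names here are hypothetical):

```
<input type="text" token="acct_prefix">
  <label>First 6 digits</label>
</input>
<panel>
  <table>
    <search>
      <query>index=my_index card_number=$acct_prefix$*</query>
      <earliest>-24h</earliest>
      <latest>now</latest>
    </search>
  </table>
</panel>
```

Note the * must end up outside any quoting of the token value for Splunk to treat it as a wildcard term.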
We are getting a "The search job terminated unexpectedly" error on a panel while running the dashboard. We have checked the role resource limits and they don't seem to be the issue. Below are the errors we are getting in splunkd.log:

07-19-2021 18:54:38.615 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search9_1626720743.295_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:27.882 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__RMD5f81a55cc52a3ee52_1626720743.187_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:39.832 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search7_1626720743.296_70D6C089-B900-44A7-887A-6EAB416257C8 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:52.889 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search10_1626720743.186_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
07-19-2021 18:52:52.950 +0000 ERROR SHCMasterHTTPProxy - Low Level HTTP request failure err=failed method=POST path=/services/shcluster/captain/artifacts/lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5/add_target captain=splunk-site1-search-head03:8089 rc=0 actual_response_code=500 expected_response_code=200 status_line="Internal Server Error" transaction_error="<response>\n <messages>\n <msg type="ERROR">failed on report target request aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 err='event=SHPMaster::addTarget aid=lydimart__lydimart_Q0lTQ08tVUNNLUNsb3VkLURhc2hib2FyZHM__search3_1626720742.182_5BAC1863-AF7D-4C2E-BFC3-AB06CDD436D5 not found'</msg>\n </messages>\n</response>\n"
Hi, I am looking at generating a search to find the 1% slowest requests from IIS logs, but I am not sure if this is possible. Just wondered if anyone has done something similar before? Thanks, Joe
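One sketch uses eventstats to compute the 99th percentile of response time across the result set and keeps only requests above it; time_taken is the standard IIS response-time field (in milliseconds), and the index/sourcetype names are assumptions:

```
index=iis sourcetype=iis
| eventstats perc99(time_taken) AS p99
| where time_taken >= p99
| sort - time_taken
```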
Hi All, one of our teammates disabled and re-enabled some apps on the SH, after which the Next Scheduled Time is showing "none" for all alerts on the SH. In the alert settings we can see that the cron schedule is set, yet none of the alerts are triggering from our SH. Can anyone suggest which setting or parameter to check? Screenshot below FYR.
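To inspect schedule state across the search head from SPL, the saved/searches REST endpoint exposes each search's cron and next run time; a sketch:

```
| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 disabled=0
| table title eai:acl.app cron_schedule next_scheduled_time
```

An empty next_scheduled_time here narrows the problem to the scheduler rather than to individual alert definitions.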
Hi all, I recently uploaded the Splunk Add-on for New Relic, version 2.2.0, and it got rejected by the Splunk vetting process. There are no errors, but there are 10 warnings. Any idea how I can get this working for Splunk Cloud? We need it to bring the log and APM details from New Relic into Splunk, so if there are alternatives out there I could use some help with that too. Thank you, Vinod