All Topics

I followed the setup instructions, but it seems I am missing something. I have made sure all the required apps are installed, but for some reason two sections are showing red. I would think that since most sources are "CIM compliant data", the two red sections should be as well. Are there any issues or additional information I may be missing? I did accelerate all of the required sources listed as well.
I have the Cisco Cloud Security app installed. The add-on is configured with a data input, and I have confirmed data is synced and contained within the configured index. When setting up the Cisco Cloud Security app, under Umbrella Settings on the Application Settings page, I cannot select an index from any of the dropdown boxes.
When using RapidDiag, I sometimes get bumped to a "Something went wrong! Click here to return to Splunk homepage. TypeError: Cannot read properties of undefined (reading 'proxy_to')" error after trying to run a report. Other times the report runs, but when I navigate to the Task Manager tab and click to go to the next page, I get the same error. Can someone please point me in the right direction to resolve this? This is a single-server install of Splunk Enterprise running v9.0.0.
Hello! I am looking for a way to override the built-in trigger condition for Notable Response Actions, "For each result". I'd like Notable Response Actions to be triggered only "Once" so results/events are more consolidated and work with my other tools more efficiently. As shown in the screenshot, the edit screen notes that "Notable response actions and risk response actions are always triggered for each result" despite the trigger being set to "Once". Is there any way to override this so Notable Response Actions are triggered once, as configured? Thanks for your help! (This setting is found under Configure -> Content -> Content Management after selecting a specific security alert to edit.)
Hello folks, I'm trying to write a drill-down search for a correlation search in Enterprise Security, and I'm having trouble extracting/accessing a field in my events. Each event has a collection of conditional access policies stored in an array, with each value in the array being a collection of key-value pairs. Here's what it looks like in the raw text (with some policies omitted):

    "appliedConditionalAccessPolicies": [
        {"id": "xxx", "displayName": "xxx", "enforcedGrantControls": [], "enforcedSessionControls": [], "result": "notEnabled"},
        {"id": "xxx", "displayName": "xxx", "enforcedGrantControls": ["Mfa"], "enforcedSessionControls": ["CloudAppSecurity"], "result": "notApplied"},
        {"id": "xxx", "displayName": "xxx", "enforcedGrantControls": ["RequireApprovedApp"], "enforcedSessionControls": [], "result": "notApplied"},
        ...
    ]

We've extracted these fields:

    appliedConditionalAccessPolicies{}.displayName
    appliedConditionalAccessPolicies{}.enforcedGrantControls
    appliedConditionalAccessPolicies{}.enforcedSessionControls
    appliedConditionalAccessPolicies{}.id
    appliedConditionalAccessPolicies{}.result

However, because we have 13 different policies in the appliedConditionalAccessPolicies array, every event contains every possible value of each of these fields, and we don't have a way to associate values from the same index of the array with each other. Many of these policies are tests, which I don't care about. I really only care about two of them, and I would like to find a way to access at least the displayName and result of only those two policies. It would also be nice to access the enforcedGrantControls and enforcedSessionControls of those policies, but they are less critical to my search. Is there a way I can index into this array in my search to pull two specific displayName and result values out and use them, for example with a stats command? Thanks in advance for your help!
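To make the goal concrete, here is a rough sketch of the kind of search I am imagining, assuming the JSON is in _raw; the index, sourcetype, and policy display names below are placeholders:

    index=azure_ad sourcetype=azure:signinlogs
    | spath path=appliedConditionalAccessPolicies{} output=policy
    | mvexpand policy
    | spath input=policy
    | search displayName IN ("Require MFA", "Block Legacy Auth")
    | stats values(result) AS result values(enforcedGrantControls) AS enforcedGrantControls BY displayName

The idea is that each array element becomes its own row, so displayName and result from the same policy stay associated. I am not sure whether this is the right approach or whether there is a better way to index into the array directly.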
I have the results of vulnerability scans for Quarter 1, Quarter 2, and Quarter 3, plus the remediation scan results for each quarter; all were added to Splunk by uploading CSV files. After adding them I have:

    host="SPL-SH-DC"  sourcetype="****"  source="*****.CSV"

with the fields IP_Address, Plugin_Name, Severity, Protocol, Port, Exploit, Synopsis, Description, Solution, See_Also, CVSS_V2_Base_Score, CVE, Plugin.

I want a report with these three statuses, "New Active Vulnerabilities", "Fixed", and "Active Vulnerabilities", based on joining on these 7 fields: IP_Address, Plugin, Plugin_Name, Severity, Protocol, Port, Exploit. I would appreciate your contribution. Ritheka kan
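To illustrate the kind of comparison I have in mind (comparing two quarters at a time), here is a rough sketch; the source filters and quarter labels are placeholders for my actual uploads:

    host="SPL-SH-DC" (source="*Q1*.CSV" OR source="*Q2*.CSV")
    | eval quarter=if(match(source, "(?i)q1"), "Q1", "Q2")
    | eval join_key=IP_Address."|".Plugin."|".Plugin_Name."|".Severity."|".Protocol."|".Port."|".Exploit
    | stats values(quarter) AS quarters BY join_key
    | eval status=case(mvcount(quarters)==2, "Active Vulnerabilities",
                       quarters=="Q2", "New Active Vulnerabilities",
                       quarters=="Q1", "Fixed")

I am not sure this is the right way to join on the seven fields across uploads, so any cleaner approach would be appreciated.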
Hi, I have JSON data for GitHub repos in the format below:

    {
        data: {
            clone_count: 0
            clone_uniques: 0
            view_count: 7
            view_uniques: 4
        }
        date: 2022-06-28
        repository: projectA
    }

This data is pushed every day for 2 repos. Now I want to create a dashboard with month on the x-axis that shows the monthly sum(clone_count) and sum(view_count) for both repos, as in the chart I have in mind. My query currently looks like this:

    index=test source="data"
    | where repository in ("binary", "manifest")
    | eval data_date=split(date, "-")
    | eval data_year=mvindex(data_date, 0)
    | eval data_month=mvindex(data_date, 1)
    | eval current_year=strftime(now(), "%Y")
    | eval current_month=strftime(now(), "%m")
    | spath output=clonecount path=data.clone_count
    | spath output=viewcount path=data.view_count
    | stats sum(clonecount) as Totalclonecount, sum(viewcount) AS Totalviewcount by repository, data_month
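For what it is worth, this is roughly the direction I was thinking of taking the query so that the months bucket cleanly (the repository names are just the ones from my query above), but I am not sure how to get from here to the monthly column chart:

    index=test source="data"
    | where repository in ("binary", "manifest")
    | spath output=clonecount path=data.clone_count
    | spath output=viewcount path=data.view_count
    | eval _time=strptime(date, "%Y-%m-%d")
    | bin _time span=1mon
    | stats sum(clonecount) AS Totalclonecount sum(viewcount) AS Totalviewcount BY _time repository

Suggestions on either the query or the visualization settings would be welcome.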
After upgrading to Splunk 9.0 on a single instance, we occasionally get KV Store errors.

The CLI status shows:

    This member:
        backupRestoreStatus : Ready
        disabled : 0
        featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [connection closed calling ismaster on '127.0.0.1:8191']
        guid : E8254C08-B854-426C-B66D-7072D625D0F6
        port : 8191
        standalone : 1
        status : failed
        storageEngine : mmapv1

I've looked on Splunk.com and googled but haven't found anything on single instances beyond reinstalling 9.0, which I've done twice.
When scanning an endpoint in SOAR, how do you get a credentialed scan? I can start a scan via a SOAR playbook, but it's not a credentialed scan.
Hi, regarding the security alert for the Splunk Universal Forwarder: is this a customer-installable upgrade (to version 9), or do I need to get my Splunk partner to install it for me? Is there a procedure document anywhere? Many thanks
The mongod command line arguments include a clear-text password, e.g. "--sslPEMKeyPassword=password". How can we avoid the clear-text password when the splunkd.exe service launches mongod? Thanks
See title. Searches still run but it constantly throws that error during every search.
Hi all, from one single value panel to another single value panel (which needs to be a hidden drilldown), can we have arrows so it looks like a tree structure?
Hi team, I need to apply a different condition in one panel of my dashboard depending on the column selected in another panel. I am trying to assign a token value in my first panel based on the column I click, and then use that token value to apply a specific condition in my second panel.

My first question is: which token value do I have to set on my token in order to identify the clicked column? That is, which should I use: $click.name$ or $click.name2$?

My second question is: how can I configure the query in the second panel to take the token value into account and apply the specific condition based on it?

Thanks in advance.
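For context, this is roughly how I imagine the second panel consuming whatever token I end up setting from the clicked column; the token name, index, sourcetype, field, and conditions below are made-up placeholders just to show the shape of what I want:

    index=my_index sourcetype=my_sourcetype
    | where case("$selected_column_tok$"=="Errors",   level=="ERROR",
                 "$selected_column_tok$"=="Warnings", level=="WARN",
                 true(), true())
    | stats count BY host

Whether $click.name$ or $click.name2$ is the right thing to feed into such a token is exactly what I am trying to confirm.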
Recently deployed this add-on, but it doesn't seem to bring back Traffic or URL logs like we did when using the TA-meraki & syslog. Are these not supported with the API-based mechanism, or is there something I'm missing - like a setting on the Meraki end to include these logs? Thanks, Gord T.
I've set up Kinesis Firehose to push to Splunk HEC, which is ingesting fine; however, I would like to add the logstream field from CloudWatch to Splunk. The code used in the Lambda can be found here: https://github.com/ptdavies17/CloudwatchFH2HEC

    sourcetype = os.environ['SPLUNK_SOURCETYPE']
    return_message = '{"time": ' + str(log_event['timestamp']) + ',"host": "' + \
        arn + '","source": "' + filterName + ':' + loggrp + '"'
    return_message = return_message + ',"sourcetype":"' + sourcetype + '"'
    return_message = return_message + ',"event": ' + \
        json.dumps(log_event['message']) + '}\n'
    return return_message + '\n'

The code block above works and returns the formatted message:

    {
        "time": 1234567891011,
        "host": "arn",
        "source": "some_source",
        "sourcetype": "some_source_type",
        "event": "event_data"
    }

When I add the logstream to the code block like so:

    sourcetype = os.environ['SPLUNK_SOURCETYPE']
    return_message = '{"time": ' + str(log_event['timestamp']) + ',"host": "' + \
        arn + '","source": "' + filterName + ':' + loggrp + '"'
    return_message = return_message + ',"logstream":"' + logstrm + '"'
    return_message = return_message + ',"sourcetype":"' + sourcetype + '"'
    return_message = return_message + ',"event": ' + \
        json.dumps(log_event['message']) + '}\n'
    return return_message + '\n'

the changes get processed by the transformation Lambda correctly (the output is valid JSON and the logs in CloudWatch confirm that):

    {
        "time": 1234567891011,
        "host": "some_host",
        "source": "some_source",
        "logstream": "logstream_in_amazon",
        "sourcetype": "some_source_type",
        "event": "some_event_data"
    }

But the Kinesis delivery stream to Splunk errors out with:

    The data is not formatted correctly. To see how to properly format data for Raw or Event HEC endpoints, see Splunk Event Data (http://dev.splunk.com/view/event-collector/SP-CAAAE6P#data). HecServerErrorResponseException{serverRespObject=HecErrorResponseValueObject{text=Invalid data format, code=6, invalidEventNumber=0}, httpBodyAndStatus=HttpBodyAndStatus{statusCode=400, body={"text":"Invalid data format","code":6,"invalid-event-number":0}...

Is there something I'm missing? I've tried tons of changes to the Lambda code, but all of them return this error. Maybe I cannot add a field from CloudWatch to Splunk this way? I am new to Splunk, so I might not be asking the right question or might be missing something, but any help would be much appreciated.
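For reference, my reading of the HEC event endpoint documentation is that it only accepts a fixed set of top-level keys (time, host, source, sourcetype, index, event, and fields), so the next variant I was planning to try is moving logstream under "fields" instead of making it a top-level key, along these lines (values are the same placeholders as above):

    {
        "time": 1234567891011,
        "host": "some_host",
        "source": "some_source",
        "sourcetype": "some_source_type",
        "fields": {"logstream": "logstream_in_amazon"},
        "event": "some_event_data"
    }

I am not sure whether the Firehose-to-HEC integration accepts the "fields" key, or whether nesting logstream inside "event" would be the better option, so confirmation either way would help.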
In the context of connecting Splunk Cloud and Phantom: does Phantom/Splunk SOAR support mTLS?
I want to create a drilldown that searches the time range displayed by the timechart command. How should I configure the time range specification in the drilldown settings?
I would like to search from 600 seconds before to 600 seconds after the time specified in the time picker on the dashboard. Is there a good way to do this? The time picker is also used in another panel, so the request is to avoid placing multiple time pickers.

This SPL did not pass:

    index=_internal earliest=$field1.earliest$-600 latest=$field1.earliest$+600
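One idea I have been toying with, assuming the time input hands back either an epoch value or a relative time string in $field1.earliest$, is to compute the window in a subsearch and return earliest/latest to the outer search:

    index=_internal
        [| makeresults
         | eval tok="$field1.earliest$"
         | eval center=if(match(tok, "^\d+(\.\d+)?$"), tonumber(tok), relative_time(now(), tok))
         | eval earliest=center-600, latest=center+600
         | return earliest latest]

I do not know if this is the recommended approach, or whether it breaks when the picker is set to "All time", so a better idea would be very welcome.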
Hello, I have created a few indexes, each containing data from only one source with one sourcetype. From a search performance point of view, is it necessary to include the sourcetype in each search if there is only one sourcetype associated with a specific index? Is Splunk's internal engine slower when I do not specify it?