All Posts

Dear experts,

My search

index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| eval _time=strftime(_time,"%Y %m %d")
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f

produces a chart in which each zbpIdentifier has a group of columns showing the number of messages over several days.

How can I change the order of the day values within each group? Green (yesterday) should be leftmost, followed by pink (the day before yesterday), then orange, and so on. | reverse changes the order of the whole groups, which is not what I need. Time sorting such as | sort +"_time" or | sort -"_time", placed before or after | chart ..., does not change anything.
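A possible workaround, sketched below under the assumption that the goal is newest-day-first ordering within each group: chart orders its split-by series lexicographically by the series label, which is why sort and reverse have no effect on the within-group order. Prefixing each day label with its age in days makes the lexicographic order match the desired order (the days_ago and day_label field names are illustrative):

index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
``` days_ago=1 is yesterday, because latest=@d excludes today ```
| eval days_ago=floor((relative_time(now(), "@d") - _time) / 86400)
| eval day_label=tostring(days_ago).": ".strftime(_time, "%Y %m %d")
| chart values(myCount) over zbpIdentifier by day_label limit=0 useother=f

With at most 9 days the single-digit prefix keeps the labels sorting in the intended order, so yesterday ("1: ...") is the leftmost column in each group.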
After bumping into the "Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=Splunk_SA_Scientific_Python_linux_x86_64" problem, we too were forced to double our max_content_length value on our search head cluster members. Taking a closer look at why this app is so big, I could see that several files under bin/linux_x86_64/4_2_2/lib are actually duplicated. What we would usually see as symlinks to the library files are full copies of the same file. After tweaking those files a little, I guess we can remove around 500MB of duplicated library files and bring the app below the accepted limit of 2GB. Maybe someone overlooked that while compiling and packaging the app?
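For reference, a minimal sketch of the setting we raised, assuming the commonly documented placement (the value shown is purely illustrative): max_content_length sits in server.conf under the [httpServer] stanza on each search head cluster member, and splunkd needs a restart for the change to take effect.

# server.conf on each SHC member (value is illustrative)
[httpServer]
max_content_length = 4294967296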
Please try to set "useRegexp: true" like below. It shouldn't be a wildcard problem.

logsCollection:
  containers:
    multilineConfigs:
      - namespaceName:
          value: app1-dev
        podName:
          value: app1.*
          useRegexp: true
        containerName:
          value: app1
        firstEntryRegex: ^(?P<EventTime>\d+\-\w+\-\d+\s+\d+:\d+:\d+\.\d+\s+\w+)
      - namespaceName:
          value: app2-*
          useRegexp: true
        podName:
          value: .*
          useRegexp: true
        containerName:
          value: .*
          useRegexp: true
        firstEntryRegex: /^\d{1}\.\d{1}\.\d{1}\.\d{1}\.\d{1}/|^[^\s].*
You can use the code below to reload the DS:

from splunklib.binding import HTTPError
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option
import splunk.clilib.cli_common as scc
import urllib.parse
import requests
import sys

def trigger_reload_deploy_server(self, sc=None):
    """Triggers the 'reload deploy-server' command via the Splunk REST API."""
    try:
        # 'log' here is the author's own logging helper
        log("INFO", "Triggering 'reload deploy-server' command.", file_name="getdiaginfo")
        splunkd_uri = scc.getMgmtUri()
        session_key = self.service.token
        endpoint = splunkd_uri + "/services/deployment/server/config/_reload"
        headers = {"Authorization": f"Splunk {session_key}", "Content-Type": "application/json"}
        # Prepare the request body: empty reloads everything, otherwise reload one server class
        if not sc:
            body = ''
        else:
            body = urllib.parse.urlencode({"serverclass": sc})
        update_response = requests.post(endpoint, headers=headers, data=body, verify=False)
        if update_response.status_code == 200:
            log("INFO", "--> 'reload deploy-server' command triggered successfully.", file_name="getdiaginfo")
            return True
        else:
            log("ERROR", f"Failed to trigger 'reload deploy-server'. Status: {update_response.status_code}, Response: {update_response.text}", file_name="getdiaginfo")
            return False
    except HTTPError as e:
        log("ERROR", f"Failed to trigger 'reload deploy-server': {e}", file_name="getdiaginfo")
        return False

@Configuration()
class Getdiaginfo(GeneratingCommand):
    server_list = Option(require=False)

    def generate(self):
        # Replace "<serverclass-name>" with the server class you want to reload
        ok = trigger_reload_deploy_server(self, "<serverclass-name>")
        yield {"reload_triggered": ok}

dispatch(Getdiaginfo, sys.argv, sys.stdin, sys.stdout, __name__)
In theory, and as a best practice, it should be, but it is unlikely that all the DCs will be on the same version as the DS unless all the best practices are followed. They will be on either the same or a lower version than the DS.
I disagree. The DS should be on the highest version of any machine in the Splunk infrastructure (if Splunk best practice is adhered to). If the app is scanned and found to be compatible with the Splunk version on the DS, then, in theory, the app should be compatible with that version of Splunk (or later, if the tests are forward-looking)?
You can try this setting and see if it helps:

syslogseverity severity OUTPUTNEW severity AS severity_auto severity_desc

Please refer to https://community.splunk.com/t5/Splunk-Cloud-Platform/Why-can-I-not-expand-lookup-field-due-to-a-reference-cycle-in/m-p/563011/highlight/true#M1055
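If that line is intended as an automatic lookup, a minimal sketch of where it would typically live, assuming a lookup definition named syslogseverity already exists (the [syslog] stanza name is illustrative):

# props.conf (sourcetype name is illustrative)
[syslog]
LOOKUP-severity = syslogseverity severity OUTPUTNEW severity AS severity_auto severity_desc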
I agree with Giuseppe; the suggestion is a good starting point. However, in MLTK you can use the outlier detection example shipped with the app. You can create a search split by source, which will show the sources responsible for the data growth as outliers.
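A minimal sketch of the idea in plain SPL, using a simple three-sigma rule rather than the MLTK assistant (the index filter and the threshold are illustrative):

| tstats count WHERE index=* BY _time span=1d source
| eventstats avg(count) AS avg_count stdev(count) AS stdev_count BY source
``` flag days where a source produced far more events than its own average ```
| where count > avg_count + 3 * stdev_count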
Hello Splunkers,

I'm having a problem while configuring the Windows Remote Management app with the Splunk SOAR platform. When testing the connectivity with the transport type set to NTLM, it fails and displays an error message. Following the error message, I disabled FIPS mode on the Windows Server and tested the connectivity again, but the issue persists. I then changed the transport type to Kerberos, but ran into a different issue.

I have a few questions:

1. Is the targeted system for integration with this app the Windows Server or the Windows/Linux endpoint?
2. Do we need to integrate the Windows Server itself in order to access the endpoints listed under the AD domain of that Windows Server with this app?

Any guidance would be appreciated!
Try this regex - Student.*?country\"\:\"(?<country>[\w]+)\"  
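A hedged usage sketch, assuming the text being matched sits in _raw (the escaped quotes in the pattern already suit SPL's quoted-string syntax; the extracted field name comes from the capture group):

| rex field=_raw "Student.*?country\"\:\"(?<country>[\w]+)\""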
It's been 24 hours since I registered for the free trial of Splunk Cloud. In the message, they said we would receive the email within 15 minutes.
There is no requirement to scan the deployment-apps folder, as those apps are going to be deployed to the deployment clients anyway and can be scanned while upgrading the deployment clients. The compatibility issue with the Splunk version won't arise while the app merely sits in the deployment-apps folder. I hope this helps.
You can check from the Azure end. Refer to https://techcommunity.microsoft.com/blog/azurepaasblog/public-access-is-not-permitted-on-this-storage-account/3521288
In the Splunk URA, it says that the scans include the /etc/apps and /etc/peer-apps folders, but not the deployment-apps folder. Therefore, the process for scanning apps in the deployment-apps folder is to find them in other places within the environment where SplunkWeb is running, install/update them there, and run the scan there. More and more companies are using Splunk Cloud, and the on-prem presence of Splunk is now mostly managed by the Splunk DS, so why can we not have the ability (in the Splunk URA) to scan the deployment-apps folder, to make on-prem upgrades easier?
I was able to install v9.3.2 without the SSLEAY32.dll and LIBEAY32.dll errors, but now our Splunk Secure Gateway has stopped working. Solving one issue introduced another:

11-12-2024, 10:03:22: Unable to initialize modular input "ssg_subscription_modular_input" defined in the app "splunk_secure_gateway": Introspecting scheme=ssg_subscription_modular_input: script running failed (exited with code 1).

websocket: Error in 'checkssgmobilewss' command: External search command exited unexpectedly with non-zero error code 1.
HTTPS (Sync): OK
HTTPS (Async): Error in 'checkssgmobileasync' command: External search command exited unexpectedly with non-zero error code 1.

Does somebody have a suggestion for what to do next? I am pretty much lost here for the moment.

AshleyP
Thank you for your thoughts, colleagues. I will check @dural_yyz's idea about the absence of an EOF marker. @PickleRick, regarding permissions, I'm pretty sure that this is not the case, because about a month ago I found out that new Splunk UFs started to use "USE_LOCAL_SYSTEM = 0" by default during silent install. Because of that, I was observing something like this on the affected UF instances:

10-27-2024 21:50:16.756 +0300 ERROR TailReader [3644 tailreader0] - Unable to remove sinkhole file: path=E:\path\file.xml, errno=Access is denied.
I am getting an error while using the search. Is there any mistake in how I placed the search you shared?

Previous search:

index=firewall (origin=10.254.17.* OR origin=10.254.252.* OR origin=10.254.253.*) OR *VGUK* OR *VGBR* OR *VGCY* OR *VGIN* OR *VGRU* OR *VGMY* OR *VGKC* OR *EQX* OR *PDN* OR *VSHW*
| search "state change: * -> Down" OR "state change: * -> Standby" OR "state change: * -> Active"
| rex field=_raw "^(?:[^:\n]*:){5}\s+(?P<State_before>[^ ]+)\s+\->\s+(?P<State_after>\w+)"
| dedup Cluster_name
| stats count by host,State_after
Hi Uma,

As far as I know you can't use an alias field, but that shouldn't be an issue: since you are creating the metric, just name it what you require, and the metric will then have the name you need in the health rules and dashboards.
bowesmana, appreciate your help. Thank you so much, the script works for me.
Let me restate what you are trying to do:

1. Select multiple values of prefix from the lookup.
2. Perform a search that filters on values of IPC equal to any of the selected prefixes.

Is this correct? Based on your mock SPL, IPC is already extracted at search time, so you don't need a second pipe to search for it. Let me first give you a mock dashboard using your search. Then I will show a demo dashboard using emulations to show how it works.

<form version="1.1">
  <label>Multivalue input</label>
  <description>https://community.splunk.com/t5/Splunk-Search/Passing-a-mutiple-values-of-label-in-input-dropdown/m-p/706304</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="my_token" searchWhenChanged="true">
      <label>select all applicable</label>
      <choice value="*">All</choice>
      <initialValue>*</initialValue>
      <fieldForLabel>displayname</fieldForLabel>
      <fieldForValue>prefix</fieldForValue>
      <search>
        <query>| inputlookup site_ids.csv</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <delimiter>,</delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>token value: &gt;$my_token$&lt;</title>
        <search>
          <query>index=abc sourcetype=sc* IPC IN ($my_token$)
| fields _time index Eventts FIELD* IPC</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

This should deliver the functionality you described. Note I moved your filter into the index search, which is more efficient. I also do not know why you list source in the first fields command but then remove this field in the last fields command, so I removed these as well.

Anyway, let me demonstrate the functionality with an emulation of these events:

FIELD1  FIELD2           IPC
2       stuff            23456789
4       more stuff       78945612
6       stuff 2          12356789
8       even more stuff  56897412
5       and stuff        78945612
14      and more stuff   23456789
9       even more        12356789

Play with the following dashboard and compare with real data.
<form version="1.1">
  <label>Multivalue input</label>
  <description>https://community.splunk.com/t5/Splunk-Search/Passing-a-mutiple-values-of-label-in-input-dropdown/m-p/706304</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="my_token" searchWhenChanged="true">
      <label>select all applicable</label>
      <choice value="*">All</choice>
      <initialValue>*</initialValue>
      <fieldForLabel>displayname</fieldForLabel>
      <fieldForValue>prefix</fieldForValue>
      <search>
        <query>| makeresults format=csv data="displayname,prefix
abc12,23456789
qwe14,78945612
rty12,12356789
yuui13,56897412"
``` the above emulates | inputlookup site_ids.csv ```</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <delimiter>,</delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>token value: &gt;$my_token$&lt;</title>
        <search>
          <query>| makeresults
| eval _raw="IPC, FIELD1, FIELD2
23456789, 2, stuff
78945612, 4, more stuff
12356789, 6, stuff 2
56897412, 8, even more stuff
78945612, 5, and stuff
23456789, 14, and more stuff
12356789, 9, even more"
| multikv
| search IPC IN ($my_token$)
``` the above emulates index=abc sourcetype=sc* IPC IN ($my_token$) ```
| fields _time index Eventts FIELD* IPC</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

If I select abc12 and yuui13, I get:

_time                index  Events  FIELD1  FIELD2           IPC
2024-12-10 23:32:17                 2       stuff            23456789
2024-12-10 23:32:17                 8       even more stuff  56897412
2024-12-10 23:32:17                 14      and more stuff   23456789

This fits exactly what you describe. In other words, I do not see any unexpected results when selecting multiple values.
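One detail worth spelling out: with <delimiter>,</delimiter>, the multiselect joins the selected prefix values with commas, so selecting abc12 and yuui13 makes the inner search expand to (values taken from the mock lookup above):

| search IPC IN (23456789,56897412)

That is why IN ($my_token$) works unchanged for one selection or many, and why the "All" choice with value "*" also drops in cleanly.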