All Posts



Hi @DCondliffe1 , let me know if I can help you more, or, please, accept one answer for the other people of Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Hello All, I am trying to build an OpenTelemetry Collector with the splunk_hec receiver. I am able to get it working and route the data to a tenant based on the token value sent in. What I wanted to do was have a way to handle invalid tokens. Obviously I do not want to ingest traffic with an invalid token, but I would like visibility into this. Is anyone aware of a way to log some sort of message indicating that a bad token was sent in, and what that token value was, and log that to a specific tenant? Here is an example config line:

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "9ff3a68d-XXXX-XXXX-XXXX-XXXXXXXXXXXX")

Can I do an else or a wildcard value?

- set(resource.attributes["log.source"], "otel.hec.nonprod.fm-mobile-backend-qa") where IsMatch(resource.attributes["com.splunk.hec.access_token"], "********-****-****-*********")

Or is there some other way to log a message to the OTel Collector with info like host or IP and the token value that was sent? I am just looking into gaining visibility into invalid token data sent.
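To illustrate the classification logic being asked for (outside the collector — the routing itself would still live in the OTTL config), here is a minimal Python sketch; the token value, the set of known tokens, and the UUID shape are hypothetical:

```python
import re

# Hypothetical set of known-good HEC tokens (UUID-formatted strings)
VALID_TOKENS = {"9ff3a68d-1111-2222-3333-444444444444"}
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def classify_token(token):
    """Return a routing label: a known token routes to its tenant,
    anything else is flagged so it can be logged for visibility."""
    if token in VALID_TOKENS:
        return "route"
    if UUID_RE.match(token):
        return "invalid-token"   # well-formed but unknown -> log it
    return "malformed-token"     # not even UUID-shaped

print(classify_token("9ff3a68d-1111-2222-3333-444444444444"))  # route
print(classify_token("deadbeef-0000-0000-0000-000000000000"))  # invalid-token
```

In OTTL terms, the "else" branch would be a catch-all statement that matches the wildcard UUID pattern but not any known token, tagging the event for a quarantine tenant.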
Hi @DCondliffe1, it's probably an error, because there aren't any pre-built panels or dashboards in this Add-on, also because this is an Add-On and not an app. Since this is a Splunk Supported Add-On, open a case with Splunk Support for it. Ciao. Giuseppe
Please read the description above, where it specifically mentions pre-built panels; there is also a YouTube video from Splunk showing a demo in which it demonstrates using pre-built panels.
I agree. The last environment I managed had UF versions ranging from high 6.x to low 9.1.x. Any upgrade readiness scan would light up like a Christmas tree looking at the DS folder.
Hi @DCondliffe1, where do you find that there are Pre-Built Panels in this Add-On? This is an add-on without any dashboards or any kind of interface, and this is confirmed also by viewing the folders. Ciao. Giuseppe
Hi @dorHerbesman, first of all, you must edit the web-features.conf file, not web.conf. Then you should try a value for each row:

[feature:dashboards_csp]
enable_dashboards_external_content_restriction = true
enable_dashboards_redirection_restriction = true
dashboards_trusted_domain.endpoint1 = http://jenkins
dashboards_trusted_domain.endpoint2 = https://jenkins

as you can read at https://docs.splunk.com/Documentation/Splunk/9.3.2/Admin/Web-featuresconf#web-features.conf.example Ciao. Giuseppe
I cannot find the Pre-built panels in the Splunk Add-on for Apache Web Server Version 2.1.0.
I have a problem with the mentioned warning on my search head (attached photo). I tried following the guide here: Configure Dashboards Trusted Domains List - Splunk Documentation, and ran:

curl -k -u admin:$password$ https://mysplunk.com:8000/servicesNS/nobody/system/web-features/feature:dashboards_csp -d dashboards_trusted_domain.exampleLabel=http://jenkins/

and got:

curl: (56) Received HTTP code 403 from proxy after CONNECT

I tried running it on the Splunk master and on some of the search heads and it didn't work. I also tried editing /etc/system/local/web.conf with:

[settings]
dashboards_trusted_domains = http://jenkins https://jenkins

and still got the same error. What am I doing wrong? Thanks in advance to helpers!
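The `curl: (56) ... 403 from proxy` error suggests an intermediate HTTP proxy, not splunkd, rejected the CONNECT; running curl with `--noproxy mysplunk.com` (or with the `https_proxy` environment variable unset) may get further. As a minimal sketch of the request shape being sent (host, label, and domain taken from the post above; the helper function name is hypothetical):

```python
def build_trusted_domain_request(base_url, label, domain):
    """Build the endpoint URL and form payload for the web-features REST
    call shown in the curl command above."""
    endpoint = (base_url +
                "/servicesNS/nobody/system/web-features/feature:dashboards_csp")
    payload = {f"dashboards_trusted_domain.{label}": domain}
    return endpoint, payload

endpoint, payload = build_trusted_domain_request(
    "https://mysplunk.com:8000", "exampleLabel", "http://jenkins/")
print(endpoint)
print(payload)
```

The actual POST would carry that payload with basic auth, which is exactly what the curl `-u`/`-d` flags do.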
Good day, I am trying to get a dashboard up and going to easily find the difference between two users' groups. I pull my information from AD into Splunk, and then if user1 has a group that user2 doesn't have, I can easily compare the two users to see what is missing. For example, users in the same department typically require the same access, but one might have more privileges, and that is what I want to see. My search works fine; the only problem is it only gives me the group difference, and now I can't see who has that group in order to add it to the user that doesn't have it. I want to add the user next to the group, for example:

group user
G-Google user1
G-Splunk user2

| set diff
    [ search index=db_assets sourcetype=assets_ad_users $user1$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
    [ search index=db_assets sourcetype=assets_ad_users $user2$
    | dedup displayName sAMAccountName memberOf
    | makemv delim="," memberOf
    | mvexpand memberOf
    | rex field=memberOf "CN=(?<Group>[^,]+)"
    | where Group!=""
    | table Group ]
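As an illustration of the logic this dashboard needs — tagging each exclusive group with the user who already has it — here is a minimal Python sketch (group names hypothetical). In SPL, one common alternative to `set diff` is to keep a user field on each row and filter groups where `dc(user) = 1` with `stats`:

```python
def group_diff(groups1, groups2, user1="user1", user2="user2"):
    """Return (group, owner) pairs for groups that only one user has.
    The 'owner' column answers 'who already has this group'."""
    only1 = sorted(set(groups1) - set(groups2))
    only2 = sorted(set(groups2) - set(groups1))
    return [(g, user1) for g in only1] + [(g, user2) for g in only2]

rows = group_diff({"G-Google", "G-AD"}, {"G-Splunk", "G-AD"})
for group, owner in rows:
    print(group, owner)
# G-Google user1
# G-Splunk user2
```

`set diff` cannot carry the user column, because adding it would make every row unique and nothing would match; computing per-user membership first, as above, sidesteps that.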
Dear experts, my search:

index="abc" search_name="xyz" Umgebung="prod" earliest=-7d@d latest=@d zbpIdentifier IN (454-594, 256-14455, 453-12232)
| bin span=1d _time aligntime=@d
| stats count as myCount by _time, zbpIdentifier
| eval _time=strftime(_time,"%Y %m %d")
| chart values(myCount) over zbpIdentifier by _time limit=0 useother=f

produces the following chart (attached). For each zbpIdentifier I have a group within the graph showing the number of messages during several days. How do I change the order of the day values within the group? Green (yesterday) should be leftmost, followed by pink (the day before yesterday) and orange, and so on. | reverse changes the order of the whole groups, which is not what I need. All kinds of time sorting like | sort +"_time" or | sort -"_time" before and after | chart do not change anything.
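One observation: after `eval _time=strftime(...)`, the split-by values are plain strings, and `chart` appears to order its columns by label, which is why sorting rows has no effect. Getting newest-first then becomes a question of column ordering, sketched here in Python (dates hypothetical); in SPL this could be approximated with `table` listing the columns explicitly, or by formatting `_time` so the desired order sorts first:

```python
def newest_first(cols):
    """Keep the row-key column first, then order the date-labelled
    columns newest-first (labels are '%Y %m %d' strings, so reverse
    lexicographic order equals reverse chronological order)."""
    return [cols[0]] + sorted(cols[1:], reverse=True)

cols = ["zbpIdentifier", "2025 01 10", "2025 01 11", "2025 01 12"]
print(newest_first(cols))
# ['zbpIdentifier', '2025 01 12', '2025 01 11', '2025 01 10']
```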
After bumping into the "Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=Splunk_SA_Scientific_Python_linux_x86_64" problem, we too were forced to double our max_content_length value on our search head cluster members. Upon a closer look at "why is this app so big", I could see that several files under bin/linux_x86_64/4_2_2/lib are actually duplicated. What we usually see as symlinks to the library files are currently full copies of the same file. After tweaking those files a little, I guess we can reduce around 500MB of duplicated library files and bring it below the accepted standard of 2GB. Maybe someone overlooked that while compiling and packaging the app?
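The duplicate check described above can be sketched by grouping files by content hash — here with an in-memory stand-in for the filesystem (file names and contents hypothetical):

```python
import hashlib
from collections import defaultdict

def find_duplicate_sets(files):
    """files: dict of path -> content bytes. Return groups of paths whose
    content is byte-identical (the symlink-turned-copy case above)."""
    by_hash = defaultdict(list)
    for path, content in files.items():
        by_hash[hashlib.sha256(content).hexdigest()].append(path)
    return [sorted(paths) for paths in by_hash.values() if len(paths) > 1]

demo = {
    "lib/libfoo.so.4.2.2": b"ELF...",
    "lib/libfoo.so.4": b"ELF...",   # should be a symlink, is a full copy
    "lib/libbar.so": b"OTHER",
}
print(find_duplicate_sets(demo))
# [['lib/libfoo.so.4', 'lib/libfoo.so.4.2.2']]
```

On a real install, the same grouping over `Path(app_dir).rglob("*")` (hashing file contents, skipping symlinks) would report how many bytes the extra copies cost.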
Please try to set "useRegexp: true" like below. It shouldn't be a wildcard problem.

logsCollection:
  containers:
    multilineConfigs:
      - namespaceName:
          value: app1-dev
        podName:
          value: app1.*
          useRegexp: true
        containerName:
          value: app1
        firstEntryRegex: ^(?P<EventTime>\d+\-\w+\-\d+\s+\d+:\d+:\d+\.\d+\s+\w+)
      - namespaceName:
          value: app2-*
          useRegexp: true
        podName:
          value: .*
          useRegexp: true
        containerName:
          value: .*
          useRegexp: true
        firstEntryRegex: /^\d{1}\.\d{1}\.\d{1}\.\d{1}\.\d{1}/|^[^\s].*
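For reference, here is a minimal sketch of how a `firstEntryRegex` stitches raw lines into events: a line matching the pattern starts a new event, anything else is appended to the previous one. The regex is the one from the first block above; the sample log lines are hypothetical:

```python
import re

FIRST_ENTRY = re.compile(r"^(?P<EventTime>\d+\-\w+\-\d+\s+\d+:\d+:\d+\.\d+\s+\w+)")

def merge_multiline(lines):
    """Group raw lines into events using the first-entry pattern."""
    events = []
    for line in lines:
        if FIRST_ENTRY.match(line) or not events:
            events.append(line)          # new event starts here
        else:
            events[-1] += "\n" + line    # continuation of previous event
    return events

sample = [
    "12-Jan-2025 10:15:30.123 INFO starting up",
    "    at com.example.Main(Main.java:10)",   # indented -> continuation
    "12-Jan-2025 10:15:31.456 ERROR boom",
]
print(len(merge_multiline(sample)))  # 2
```

If the pod's timestamps don't match the pattern, every line becomes its own event, which is the usual symptom of a wrong `firstEntryRegex`.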
You can use the below code to reload the DS:

import sys
import urllib.parse

import requests
import splunk.clilib.cli_common as scc
from splunklib.binding import HTTPError
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

# log() is a custom logging helper assumed to exist elsewhere in the app


def trigger_reload_deploy_server(self, sc=None):
    """Triggers the 'reload deploy-server' command via the Splunk REST API."""
    try:
        # Trigger the 'reload deploy-server' command using the REST API
        log("INFO", "Triggering 'reload deploy-server' command.", file_name="getdiaginfo")
        splunkd_uri = scc.getMgmtUri()
        session_key = self.service.token
        endpoint = splunkd_uri + "/services/deployment/server/config/_reload"
        headers = {"Authorization": f"Splunk {session_key}", "Content-Type": "application/json"}
        # Prepare the request body
        if not sc:
            body = ''
        else:
            body = urllib.parse.urlencode({"serverclass": sc})
        update_response = requests.post(endpoint, headers=headers, data=body, verify=False)
        if update_response.status_code == 200:
            log("INFO", "--> 'reload deploy-server' command triggered successfully.",
                file_name="getdiaginfo")
            return True
        else:
            log("ERROR", f"Failed to trigger 'reload deploy-server'. "
                f"Status: {update_response.status_code}, Response: {update_response.text}",
                file_name="getdiaginfo")
            return False
    except HTTPError as e:
        log("ERROR", f"Failed to trigger 'reload deploy-server': {e}", file_name="getdiaginfo")
        return False


@Configuration()
class Getdiaginfo(GeneratingCommand):
    server_list = Option(require=False)

    def generate(self):
        trigger_reload_deploy_server(self, "<serverclass-name>")


dispatch(Getdiaginfo, sys.argv, sys.stdin, sys.stdout, __name__)
In theory, and as a best practice, it should, but it is unlikely that all the DCs will be on the same version as the DS unless all the best practices are followed. Either way, they will be on the same or a lower version than the DS.
I disagree. The DS should be the highest version of any machine in the Splunk infrastructure (if Splunk Best Practice is adhered to). If the app is scanned and found to be compatible with the version on the DS, then, in theory, the app should be compatible with the version of Splunk on the DS (or later, if the tests are forward-looking)?
You can try this setting if it helps:

syslogseverity severity OUTPUTNEW severity AS severity_auto severity_desc

Please refer to https://community.splunk.com/t5/Splunk-Cloud-Platform/Why-can-I-not-expand-lookup-field-due-to-a-reference-cycle-in/m-p/563011/highlight/true#M1055
I agree with Giuseppe; his suggestion is a good place to start. However, in MLTK you can use the outlier detection example shipped with the app. You can create a search split by source, which will show the sources responsible for the data growth as outliers.
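As a minimal stand-in for that idea — flagging per-source daily counts that fall outside IQR fences, a simplified version of what the MLTK outlier example computes (source names and numbers hypothetical):

```python
def iqr_outliers(counts, k=1.5):
    """Return values outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR],
    using simple index-based quartiles on the sorted data."""
    s = sorted(counts)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [c for c in counts if c < lo or c > hi]

daily_gb = {"source_a": [10, 11, 10, 12, 11, 48], "source_b": [5, 5, 6, 5, 6, 5]}
for src, counts in daily_gb.items():
    print(src, iqr_outliers(counts))
# source_a [48]
# source_b []
```

Split by source, only the source whose volume jumped shows an outlier, which points straight at the cause of the growth.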