All Topics



Hi, can someone help me? I'm trying to call a webhook on AWX Tower (Ansible) using the Add-On Builder. This is my script; it doesn't work, but I don't get an error message either:

    # encoding = utf-8

    def process_event(helper, *args, **kwargs):
        """
        # IMPORTANT
        # Do not remove the anchor macro:start and macro:end lines.
        # These lines are used to generate sample code. If they are
        # removed, the sample code will not be updated when configurations
        # are updated.
        [sample_code_macro:start]
        # The following example gets the alert action parameters and prints them to the log
        machine = helper.get_param("machine")
        helper.log_info("machine={}".format(machine))

        # The following example adds two sample events ("hello", "world")
        # and writes them to Splunk
        # NOTE: Call helper.writeevents() only once after all events
        # have been added
        helper.addevent("hello", sourcetype="sample_sourcetype")
        helper.addevent("world", sourcetype="sample_sourcetype")
        helper.writeevents(index="summary", host="localhost", source="localhost")

        # The following example gets the events that trigger the alert
        events = helper.get_events()
        for event in events:
            helper.log_info("event={}".format(event))

        # helper.settings is a dict that includes environment configuration
        # Example usage: helper.settings["server_uri"]
        helper.log_info("server_uri={}".format(helper.settings["server_uri"]))
        [sample_code_macro:end]
        """
        helper.log_info("Alert action awx_webhooks started.")

        # TODO: Implement your alert action logic here
        import requests
        url = 'https://<AWX-URL>/api/v2/job_templates/272/gitlab/'
        headers = {'Authorization': 'X-Gitlab-Token: <MYTOKEN>'}
        response = requests.post(url, headers=headers, verify=False)
        print(response.status_code)
        print(response.text)
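A possible culprit to check, sketched below on the assumption that the AWX endpoint behaves like a GitLab webhook receiver: the token probably belongs in an X-Gitlab-Token header of its own rather than inside the Authorization header. This is a stdlib-only sketch (urllib instead of requests, so it runs anywhere); the URL and token are the same placeholders as in the script above, and `trigger_job` is a hypothetical helper name, not Add-On Builder API.

```python
import ssl
import urllib.request

def build_request(url, token):
    # GitLab-style webhook endpoints usually read the token from a
    # dedicated X-Gitlab-Token header, not from Authorization.
    return urllib.request.Request(
        url,
        data=b"{}",  # empty JSON body
        headers={"X-Gitlab-Token": token, "Content-Type": "application/json"},
        method="POST",
    )

def trigger_job(helper, url, token):
    # Unverified context mirrors verify=False in the original script;
    # avoid it outside of testing.
    ctx = ssl._create_unverified_context()
    req = build_request(url, token)
    with urllib.request.urlopen(req, context=ctx, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
        # Log instead of print(), so the result lands in the add-on's log
        helper.log_info("status={} body={}".format(resp.status, body))
        return resp.status
```

Logging the status and body via helper.log_info (rather than print) should also make the "no error message" symptom easier to diagnose.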
I just want to pose a quick question about the Microsoft API URLs that are used in the add-on. At what point will the add-on be updated to reflect the new URL changes? I had a conversation with a Microsoft engineer, and he mentioned that the following URLs may not work past Dec 31, 2024:

    API_ADVANCED_HUNTING = "/api/advancedhunting/run"
    API_ALERTS = "/api/alerts"
    API_INCIDENTS = "/api/incidents"

This link shows the difference between some of the old vs. new URLs: Use the Microsoft Graph security API - Microsoft Graph v1.0 | Microsoft Learn

I know it's a while off; however, it comes quickly at times. I'm just trying to understand the process so I can stay ahead of it. Also, I have seen add-ons that offer both legacy and current inputs as options. It would be great to have an option like that for this add-on before the URL switch.
Hi, we currently have events where identifying the app that produced the event depends on multiple fields, as well as substrings within those fields. For example:

    app 1 is identified by SourceName=Foo "bar("
    app 2 is identified by SourceName=Foo "quill("
    app 3 is identified by SourceName=Foo
    app 4 is identified by source=abcde
    app 5 is identified by sourcetype=windows eventcode=11111

I would like to count the number of errors per app, but I'm not having any luck yet. I've tried regexes and an eval case/match pattern, and I can't seem to google the right words to find a similar scenario in others' posts. Please help. Thanks, Orion
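One way to sketch this, using the field names from the list above (SourceName, source, sourcetype, eventcode are taken from the post and may need adjusting to the actual extracted field names): a single eval case(), whose first matching branch wins, followed by a stats count. The substring tests are assumed to be against _raw.

```
... your base search ...
| eval app=case(
    SourceName=="Foo" AND match(_raw, "bar\("),   "app 1",
    SourceName=="Foo" AND match(_raw, "quill\("), "app 2",
    SourceName=="Foo",                            "app 3",
    source=="abcde",                              "app 4",
    sourcetype=="windows" AND eventcode="11111",  "app 5")
| stats count by app
```

Because case() evaluates its conditions in order, the more specific substring branches must come before the bare SourceName=Foo branch.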
Hi, I need to know the steps for, and an understanding of, how to configure LDAP authentication via the GUI, available under Settings > Authentication methods > LDAP. If anyone can share the background and exact steps, that would be helpful. Thanks
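For orientation, the GUI writes its settings to authentication.conf; a minimal sketch of what an LDAP strategy looks like on disk may help map each GUI field to a setting. All hostnames, DNs, and group names below are placeholders for your environment.

```
[authentication]
authType = LDAP
authSettings = corp_ldap

[corp_ldap]
host = ldap.example.com
port = 389
bindDN = cn=splunk-bind,ou=service,dc=example,dc=com
bindDNpassword = <bind password>
userBaseDN = ou=people,dc=example,dc=com
userNameAttribute = uid
realNameAttribute = cn
groupBaseDN = ou=groups,dc=example,dc=com
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_corp_ldap]
admin = splunk-admins
user = splunk-users
```

The roleMap stanza is what binds LDAP groups to Splunk roles; in the GUI this is the "Map groups" step after the strategy is saved.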
Hi Team, I want to show the top 10 DB query wait states in an AppDynamics dashboard. Kindly suggest.
Hi! I am facing an issue after adding a new field to the ES identity KV store. After adding the new field, the automatic lookup doesn't work and never returns my new field in my events, but I can manually retrieve it with this query:

    | inputlookup ES_identity_kvstore

while this one:

    index=my_index | lookup ES_identity_kvstore...

throws me an error:

    [comma separated list of my indexers] phase_0 - Streamed search execute failed because: Error in 'lookup' command: Cannot find the destination field 'my_new_field' in the lookup table 'ES_identity_kvstore'.

Still, with the following query, forcing the SH to run the lookup, I can retrieve my new field:

    index=my_index | lookup local=true ES_identity_kvstore...

collections.conf (with replicate=true) and props.conf are correctly updated on the SH, so I think I may be missing something in my indexers' configuration, but I cannot figure out what it is... Do you have any idea? Thanks!
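One thing worth checking, since local=true works but the distributed lookup fails: the lookup definition that gets bundled to the indexers must list the new field. A sketch of the transforms.conf stanza (stanza, collection, and field names are taken from the post; the exact existing fields_list is an assumption):

```
[ES_identity_kvstore]
external_type = kvstore
collection = ES_identity_kvstore
fields_list = _key, identity, my_new_field
```

If fields_list omits my_new_field, the indexers would report exactly the "Cannot find the destination field" error shown above, while a search head running the lookup locally against the KV store could still return it.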
Which specific file or folder inside the Splunk root folder can we map in IIS so that it picks up the web files or binaries/distributables to be rendered in the browser?
When trying to make a connection with the DB Connect app using the "MS-SQL Server Using MS Generic Driver" driver, it gives an error referencing port 6666, but in the connection string I use port 1433. Does anyone know why this change is happening, and how do I solve it? Note: a firewall rule has already been created for port 1433.

Connection string:

    jdbc:sqlserver://myhost.database.windows.net:1433;databaseName=mydb;selectMethod=cursor;encrypt=false

Error:

    Connection failure reason: The TCP/IP connection to the host myhost.database.windows.net, port 6666 has failed. Error: "connect timed out. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.". Diagnosis: Either the database is unavailable, or the specified host/port is incorrect, or you are blocked by a firewall. Troubleshooting recommendation: Make sure the database is running on the server and you or the database are not blocked by a firewall.
I have a source file with comma-separated fields, and I have to create a table that combines and shows statistics for the different teams.

Data file fields: TeamName,EmploymentType,Skills,TrainingCompleted

Source file data:

    TeamA,Contract,Java,Yes
    TeamA,Contract,DotNet,No
    TeamA,Contract,C++,Yes
    TeamA,Contract,ReactJS,No
    TeamB,Permanent,Java,Yes
    TeamB,Permanent,DotNet,No
    TeamB,Permanent,C++,Yes
    TeamB,Permanent,ReactJS,No
    TeamC,Contract,Java,Yes
    TeamC,Contract,DotNet,No
    TeamC,Contract,C++,Yes
    TeamC,Contract,ReactJS,No
    TeamD,Permanent,Java,Yes
    TeamD,Permanent,DotNet,No
    TeamD,Permanent,C,Yes
    TeamD,Permanent,ReactJS,No
    TeamE,Contract,Java,Yes
    TeamE,Contract,DotNet,No
    TeamE,Contract,Java,Yes

Now the requirement is to create a table view of the source file with the columns below:

    TeamName | EmploymentType | Skills                  | TrainingCompleted | Team Appearance | Training Completion%
    TeamA    | Contract       | Java,DotNet,ReactJS,C++ | 2                 | 4               | 50%
    TeamB    | Permanent      | Java,DotNet,ReactJS,C++ | 2                 | 4               | 50%
    TeamC    | Contract       | Java,DotNet,ReactJS,C++ | 2                 | 4               | 50%
    TeamD    | Permanent      | Java,DotNet,ReactJS,C   | 2                 | 4               | 50%
    TeamE    | Contract       | Java,DotNet             | 2                 | 3               | 67%

Please give me the exact query. I am a beginner in Splunk.
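Not a guaranteed exact query, but a sketch that should reproduce the table above from the listed field names, assuming TrainingCompleted is literally "Yes"/"No": Team Appearance is taken as the row count per team, TrainingCompleted as the count of Yes rows, and the percentage as their ratio.

```
... your base search over the CSV ...
| stats values(EmploymentType) as EmploymentType,
        values(Skills) as Skills,
        sum(eval(if(TrainingCompleted=="Yes", 1, 0))) as TrainingCompleted,
        count as "Team Appearance"
        by TeamName
| eval "Training Completion%" = round(100 * TrainingCompleted / 'Team Appearance') . "%"
```

values() also deduplicates Skills per team (note TeamE lists Java twice in the source data), matching the expected output.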
Hi Team, we are frequently getting this error message in Splunk's internal logs:

    Error in 'where' command: The expression is malformed. An unexpected character is reached at '*) OR match(indicator, *_ip) OR match(indicator, *_host))

Any hints will be appreciated. Thanks in advance.
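The fragment in the error suggests match() is being called with bare wildcards like *_ip as its second argument; match() expects a quoted regular-expression string. A sketch of the likely intent (the field name indicator is taken from the error; the suffix patterns are an assumption about what the wildcards meant):

```
| where match(indicator, "_ip$") OR match(indicator, "_host$")
```

If glob-style matching of the field's value was intended instead, the like() function with % wildcards is the usual alternative.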
Hi all, I found that the information in my Monitoring Console (Splunk version 9.1.1) about the Replication Factor is wrong. Has anyone experienced the same thing? In [Monitoring Console > Overview of Splunk Enterprise 9.1.1], Replication Factor = 3 is displayed, but I configured Replication Factor = 2 (it's a multisite deployment, so origin=1, total=2). Is it maybe the Search Head Cluster Replication Factor (which is 3), or simply a display error? Thank you for your advice. Ciao. Giuseppe
Hi Team, we are planning to integrate Splunk with the ticketing system BMC Helix. We would like to know if it's possible to raise a ticket in BMC Helix automatically when knowledge objects are triggered. If yes, what is the way to do that? Thanks in advance, Siva.
I have a log feed which was configured by a previous employee. Documentation does not exist, of course... The feed stopped once we migrated indexers. I checked the deployment server, and there do not seem to be any apps on it that ingest this feed. Then we added the old indexer back into the architecture and the feed started working again! When I check these events in Search & Reporting, I can see (via the splunk_server field) that the feed is only coming in through this legacy indexer. I logged into the legacy indexer via the CLI and ran btool on inputs for the index, source, and sourcetypes: no matches. I'm also struggling to find anything useful in the _internal events via the search head GUI. Both indexers are in a cluster, so the config should be identical, but the events only come in via the legacy indexer. How can I find out how this feed is configured?
I am trying to integrate this solution into Splunk, but I am running into problems. The most significant so far is the number of retrieved events. I use the official event collector app, Radware CWAF Event Collector | Splunkbase. It works via an API user, and I found that the number of events in Splunk doesn't match the number of events in the cloud console. After opening a support ticket with Radware, they told me that the problem is the "pulling rate" of the API configuration in my Splunk. I have been trying to find out how to configure this "pulling rate" in Splunk, but I found nothing. Do you know how to set this parameter, or how have you solved this integration? This is exactly what they told me:

Our cloudops team checked to see if there are any differences between the CWAF and the logs that are sent to your SIEM. They found that there is a queue of logs that are waiting to be pulled by your SIEM. Therefore, we do not have any evidence that the issue is with the SIEM infrastructure. For example, if we send 10 events per minute and the SIEM is pulling 5 per minute, this will create a queue of logs. Unfortunately, we cannot support customer-side configuration. It might be more helpful to consult with the support team for the SIEM you are using, as the interval might be the "Pulling Rate."
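If the add-on is a standard Splunk modular input, its polling cadence is typically the interval setting (in seconds) on the input stanza in inputs.conf, editable under Settings > Data inputs or in the app's local/inputs.conf. The stanza name below is a placeholder — I don't know the exact scheme the Radware app registers — so check the app's default/inputs.conf for the real one:

```
[radware_cwaf_input://my_collector]
interval = 60
```

A shorter interval makes the collector pull more often, which is the kind of "pulling rate" the vendor appears to be describing; whether each pull can also fetch more events per call depends on the app's own parameters.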
If we configure a fast volume for hot/warm and slower spindles for cold, and set maxVolumeDataSizeMB to enforce sizes, can you see any situation where cold would fill but hot/warm would still have space?
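For reference, a sketch of the kind of indexes.conf layout being described (paths, sizes, and the index name are placeholders). Each volume's maxVolumeDataSizeMB is enforced independently, which is why one volume can reach its cap while the other still has headroom:

```
[volume:hotwarm]
path = /fast_disk/splunk
maxVolumeDataSizeMB = 500000

[volume:cold]
path = /slow_disk/splunk
maxVolumeDataSizeMB = 2000000

[my_index]
homePath = volume:hotwarm/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
```

When the cold volume hits its cap, Splunk freezes the oldest cold buckets to make room, regardless of how much free space the hot/warm volume still has.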
I have a client creating a new system that will have 2 sites, with rf/sf per site being 1 and the total rf/sf being 2. Each site will have 3+ indexers. My question is how this affects bucket distribution over the indexers if you leave maxbucket at the default of 3, and whether there are any performance implications. They're going with rf/sf of 1 due to disk costs. Thanks in advance.
I am basically faced with this problem:

    | makeresults count=3
    | streamstats count
    | eval a.1 = case(count=1, 1, count=2, null(), count=3, "null")
    | eval a.2 = case(count=1, "null", count=2, 2, count=3, null())
    | eval a3 = case(count=1, null(), count=2, "null", count=3, 3)
    | table a*
    | foreach mode=multifield * [eval <<FIELD>>=if(<<FIELD>>="null",null(),'<<FIELD>>')]

I have fields that contain a `.`, which breaks the `foreach` command. Is there a way to work around this? I have tried using `"<<FIELD>>"`, but to no avail. I think it would work to rename all the "bad" names, loop through them, and rename them back, but if possible I would like to avoid doing that.
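One workaround sketch: it is the rename round-trip you were hoping to avoid, but wildcard renames keep it to two extra commands rather than a per-field loop. This assumes the problematic fields all share the a.* prefix, as in the example above.

```
| rename a.* as a_*
| foreach mode=multifield a_* [eval <<FIELD>>=if('<<FIELD>>'="null", null(), '<<FIELD>>')]
| rename a_* as a.*
```

Quoting the <<FIELD>> token in single quotes on the right-hand side of the eval also guards against field names that would otherwise be parsed as expressions.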
Hi, I'm not sure how to fix this; I hope someone can give me a hint. The search looks like:

    index=asa host=1.2.3.4 src_sg_info=*
    | timechart span=10m dc(src_sg_info) by src_sg_info
    | rename user1 as "David E"

This Splunk search gives a list of active/logged-on VPN users. So far so good. My question is the following: how do I include empty src_sg_info in the same timechart and mark it as "No active VPN user"?
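A sketch of one approach, assuming events without src_sg_info should feed the "no user" series: drop the src_sg_info=* filter so those events are kept, then substitute a label for the missing value before charting.

```
index=asa host=1.2.3.4
| eval src_sg_info=coalesce(src_sg_info, "No active VPN user")
| timechart span=10m count by src_sg_info
```

If no such events exist at all during quiet periods, an appended series or fillnull after the timechart would be needed instead, since there would be nothing for coalesce to relabel.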
I have a simple report that has Hebrew in it. Exporting to CSV works as it should and I can see the Hebrew, but exporting to PDF shows nothing.

    | makeresults
    | eval title = "לא רואים אותי"

Can someone help? Thanks
Hi, I'm trying Splunk SOAR Community Edition, and I'm having an issue with the Elasticsearch app. I'm attempting to configure the asset with my Elasticsearch instance. The test connectivity is good, but I can't poll incidents with "Poll Now." I encounter this type of error:

    Starting ingestion... If an ingestion is already in progress, this request will be queued and completed after that request completes.
    App 'Elasticsearch' started successfully (id: 1699519715123) on asset: 'elastic'(id: 4)
    Loaded action execution configuration
    Quering data for soar index
    Successfully added containers: 0, Successfully added artifacts: 0
    1 action failed
    Unable to load query json. Error: Error Message: Expecting value: line 1 column 1 (char 0)

However, when I use an action in a playbook with the "run query" command, I can see data. Has anyone ever encountered this error?