All Topics

Hello, I need your help with timepicker values. I'd like to be able to keep some and hide others. I would like to hide all presets linked to real time ("Today", ...).

In the predefined periods:
Yesterday
Since the start of the week
Since the beginning of the working week
From the beginning of the month
Year to date
Yesterday
Last week
The previous working week
The previous month
The previous year
Last 7 days
Last 30 days
Other
Anytime

I would also like to have:
Period defined by a date
Date and period
Advanced
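The presets shown in the time range picker come from times.conf, so one way to do this (a hedged sketch, not verified on your version) is to override the built-in stanzas you want to hide with disabled = 1, assuming your version honors that attribute, and define or keep the ones you want. The stanza names below are assumptions; check the default times.conf shipped with Splunk for the real names on your instance.

# $SPLUNK_HOME/etc/apps/<your_app>/local/times.conf
# Hide a built-in real-time style preset by overriding its stanza
[today]
disabled = 1

# Define or keep a relative preset you do want to see
[last_7_days]
label = Last 7 days
earliest_time = -7d@d
latest_time = now
order = 50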
I am using Windows 10 and the Splunk Universal Forwarder version 9.4.0. When I run certain Splunk commands from an Admin Command Prompt, the command window freezes with a blinking cursor and fails to execute. I have to use Ctrl+C to stop the command. Some commands work without issues, such as: > splunk status – which confirms that Splunk is running, and > splunk version – which displays the version number. However, other commands, like: > splunk list forward-servers or > splunk display local-index, do not return any results. Instead, the cursor just blinks indefinitely. Has anyone experienced this issue before or found a solution?
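splunk status and splunk version work without talking to the running service, whereas list forward-servers and display local-index have to authenticate against splunkd's management port, so a blinking cursor is often the CLI silently waiting for credentials (or for a management port it cannot reach). A hedged thing to try, with the install path and credentials as placeholders:

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk list forward-servers -auth admin:YourPassword
splunk display local-index -auth admin:YourPassword

REM if it still hangs, check that splunkd is actually listening on the management port
netstat -ano | findstr 8089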
Hi all, I have a data structure like the following:

title1 title2 title3 title4 value

and I need to group by title1, returning the title4 of the row where value (a numeric field) is at its maximum. Can I use eval inside stats to get this? Something like:

| stats values(eval(title4 where value is max)) AS title4 BY title1

How can I do it? Ciao. Giuseppe
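You can't make a single eval look across rows like that, but a common idiom is to use eventstats to compute the per-group maximum first and then filter. A minimal sketch using the field names from your example ("..." stands for your base search):

... | eventstats max(value) AS max_value BY title1
| where value=max_value
| stats values(title4) AS title4, max(value) AS value BY title1

If two rows in a group tie on the maximum, values() will return both title4 values; swap in latest() or first() if you need exactly one.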
Hello,  I am just trying to do a regex to split a single field into two new fields. The original field is: alert.alias = STORE_176_RSO_AP_176_10 I need to split this out to 2 new fields. First field = STORE_176_RSO Second field = AP_176_10 I am horrific at regex and am not sure how I can pull this off.  Any help would be awesome.   Thank you for your help, Tom
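Assuming the split point is always after the third underscore-separated token (or, equivalently, right after "_RSO"), here is a sketch with made-up output field names:

| rex field=alert.alias "^(?<site>[^_]+_[^_]+_[^_]+)_(?<device>.+)$"

or, anchoring on the _RSO suffix instead:

| rex field=alert.alias "^(?<site>.+?_RSO)_(?<device>.+)$"

Either pattern turns STORE_176_RSO_AP_176_10 into site=STORE_176_RSO and device=AP_176_10; rename the capture groups to whatever field names you actually want.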
Hello, I am following this document: https://docs.splunk.com/Documentation/Splunk/9.4.0/Security/ConfigureandinstallcertificatesforLogObserver?ref=hk to configure and install certificates in Splunk Enterprise for Splunk Log Observer Connect, but I am getting the error described below. I have generated myFinalCert.pem as per the document; here are my server.conf and web.conf configurations.

# cat ../etc/system/local/server.conf
[general]
serverName = ip-xxxx.us-west-2.compute.internal
pass4SymmKey = $7$IHXMpPIvtTGnxEusRYk62AjBIizAQosZq0YXtUg==

[sslConfig]
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
requireClientCert = false
sslPassword = $7$vboieDG2v4YFg8FbYxW8jDji6woyDylOKWLe8Ow==

[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
peers = *
quota = MAX
stack_id = download-trial

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free

# cat ../etc/system/local/web.conf
[expose:tlPackage-scimGroup]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups/*

[expose:tlPackage-scimGroups]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Groups

[expose:tlPackage-scimUser]
methods = GET,PUT,PATCH,DELETE
pattern = /identity/provisioning/v1/scim/v2/Users/*

[expose:tlPackage-scimUsers]
methods = GET
pattern = /identity/provisioning/v1/scim/v2/Users

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem

After making changes to server.conf I am able to restart the splunkd service, but after making changes to web.conf the restart gets stuck. Here is the related output:

# ./splunk restart
splunkd is not running. [FAILED]
Splunk> The IT Search Engine.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Validated: _audit _configtracker _dsappevent _dsclient _dsphonehome _internal _introspection _metrics _metrics_rollup _telemetry _thefishbucket history main sim_metrics statsd_udp_8125_5_dec summary
Done
Checking filesystem compatibility... Done
Checking conf files for problems... Done
Checking default conf files for edits...
Validating installed files against hashes from '/opt/splunk/splunk-9.3.2-d8bb32809498-linux-2.6-x86_64-manifest'
All installed files intact.
Done
All preliminary checks passed.
Starting splunk server daemon (splunkd)...
PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped with the embedded Python interpreter; must be set to "1" for increased security
Done [ OK ]
Waiting for web server at https://127.0.0.1:8000 to be available...............................WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

Please let me know if I am missing something. Thanks
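One thing that stands out (a hedged guess, not a confirmed diagnosis): when enableSplunkWebSSL is true, the [settings] stanza of web.conf normally needs both serverCert and privKeyPath, and Splunk Web can hang at "Waiting for web server..." if it cannot load the private key. Something along these lines, where the key filename is a placeholder for wherever your key actually lives:

[settings]
enableSplunkWebSSL = true
serverCert = /opt/splunk/etc/auth/sloccerts/myFinalCert.pem
privKeyPath = /opt/splunk/etc/auth/sloccerts/myServerPrivateKey.key
# sslPassword = <needed only if the private key is passphrase-protected>

If the web UI still hangs after that, $SPLUNK_HOME/var/log/splunk/web_service.log and splunkd.log usually show why the web server could not come up.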
Hi, The Mimecast App gets events for most of the activity that occurs in the solution but does not give the option to get archive events. Does anybody know if they plan to add that functionality soon? Just in case, so I do not have to develop that part on my own. I refer to these two API calls:
https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-message-view-logs/
https://integrations.mimecast.com/documentation/endpoint-reference/logs-and-statistics/get-archive-search-logs/
The rest of the functionality is included in the current version 5.2.0. And no, the events generated when someone reads the content of an email are not stored with the Audit events. Thanks!
Hi, I need help. I have just upgraded my indexer cluster, composed of 4 Windows 2022 servers, to the new Splunk version 9.4.0. As always, I followed the upgrade procedure, but this time one of my 4 servers refuses to upgrade; it rolls back each time. I checked the failed installation logs and noticed that the KV store was failing to upgrade! Can anyone help me fix this problem? Thanks for your help.
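Before retrying the upgrade on the failing indexer, it may help to capture the KV store's state and logs so you can see why the migration rolls back. A hedged set of checks from the Splunk bin directory (credentials and install path are placeholders):

splunk show kvstore-status -auth admin:YourPassword
findstr /i "error fail" "C:\Program Files\Splunk\var\log\splunk\mongod.log"
findstr /i "kvstore" "C:\Program Files\Splunk\var\log\splunk\splunkd.log"

If mongod.log shows the KV store failing to start or an incomplete storage engine migration, that usually has to be resolved on the current version before the 9.4.0 installer will go through.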
Hello, We have a lookup CSV file with 1 million records (data1) and a KV store with 3 million records (data2). We need to compare a street address in data2 with a fuzzy match of the street address in data1, returning the property owner. Example:

data2 street address:    123 main street
data1 street address:    123 main street apt 13

We ran a regular lookup command and this took well over 7 hours. We have tried creating a sub-address (data1a) with the apt/unit numbers removed, but it is still a 7-hour search. Plus, if there is more than one apt/unit at the address, there might be more than one property owner. This is why a fuzzy-type compare is what we are looking for. Hope my explanation is clear. Ask if not. Thanks and God bless, Genesius (Merry Christmas and Happy Holidays)
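Rather than a per-row fuzzy compare (which is what makes the lookup so slow), one pattern is to normalize both sides to the same key (lower-case, collapse whitespace, strip apt/unit suffixes), write the normalized CSV back out once, and then do a plain exact-match lookup on that key. A hedged sketch; the field and lookup names are invented for illustration. Build the normalized lookup once:

| inputlookup data1.csv
| eval addr_key=lower(trim(street_address))
| eval addr_key=replace(addr_key, "\s+(apt|unit|ste|suite|#)\s*\S+$", "")
| eval addr_key=replace(addr_key, "\s+", " ")
| outputlookup data1_normalized.csv

Then match from the data2 side:

| inputlookup kvstore_data2
| eval addr_key=lower(trim(street_address))
| eval addr_key=replace(addr_key, "\s+", " ")
| lookup data1_normalized.csv addr_key OUTPUT property_owner

Where one normalized address maps to several apartments, the lookup returns the property_owner values as a multivalue field, which sounds like what you want from the fuzzy compare anyway.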
I have a client who wants to share the README file in their app with end users so that they can reference it in the UI. Seems reasonable, and it prevents them having to duplicate content into a view. Otherwise the README file is only available to admins who have CLI access. I have tried using the REST endpoint to locate the file and have checked that the metadata allows read; it is just the path and the actual capability I am unclear on. https://<splunk-instance>/en-GB/manager/<redacted>/apps/README.md Thanks
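As far as I know the manager endpoint will not serve arbitrary files from the app root, but anything placed in the app's appserver/static directory is served to any user who can access the app, which gets you the same result without duplicating content into a view. A hedged sketch, where the app name is a placeholder:

cp $SPLUNK_HOME/etc/apps/your_app/README.md $SPLUNK_HOME/etc/apps/your_app/appserver/static/README.md

# end users can then reach it (or you can link to it from a view) at:
https://<splunk-instance>/en-GB/static/app/your_app/README.md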
Hi, First of all, I'm a total beginner with Splunk. I just started my free trial of Splunk Cloud and want to install the UF on my MacBook. I don't know how to install the credential file, splunkclouduf.spl. I have unpacked that file, but which directory should I move the contents to? You can also see the directory of the SplunkForwarder.
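You normally don't unpack splunkclouduf.spl by hand at all; it is installed as an app on the forwarder, and the restart picks up the credentials. A sketch assuming the default macOS UF install path:

cd /Applications/SplunkForwarder/bin
sudo ./splunk install app /path/to/splunkclouduf.spl
sudo ./splunk restart

After the restart, ./splunk list forward-servers should show your Splunk Cloud inputs endpoints as active.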
Hello, I am configuring statsd to send custom metrics from an AWS EC2 instance, on which splunk-otel-collector.service is running, to Splunk Observability Cloud so I can monitor these custom metrics. I have followed the steps in https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver/statsdreceiver to set up statsd as a receiver:

receivers:
  statsd:
    endpoint: "localhost:8125" # default
    aggregation_interval: 60s  # default
    enable_metric_type: false  # default
    is_monotonic_counter: false # default
    timer_histogram_mapping:
      - statsd_type: "histogram"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "distribution"
        observer_type: "histogram"
        histogram:
          max_size: 50
      - statsd_type: "timing"
        observer_type: "summary"

I have a problem setting up the service section for statsd as a receiver. As per the GitHub doc, the configuration below is written for the exporters, but I am not sure how this will work:

exporters:
  file:
    path: ./test.json

service:
  pipelines:
    metrics:
      receivers: [statsd]
      exporters: [file]

I also tried setting the exporters in the service section, with "receivers: [hostmetrics, otlp, signalfx, statsd]" and "exporters: [signalfx]" in the agent_config.yaml file as shown below. When I restart with "systemctl restart splunk-otel-collector.service", the Splunk OTel Collector agent stops sending any metrics to Splunk Observability Cloud, and when I remove statsd (receivers: [hostmetrics, otlp, signalfx]) the agent starts sending metrics again.

# pwd
/etc/otel/collector
# ls
agent_config.yaml  config.d  fluentd  gateway_config.yaml  splunk-otel-collector.conf  splunk-otel-collector.conf.example  splunk-support-bundle.sh

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

What should be the correct/supported exporter for statsd as a receiver? Thanks
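In the Splunk distribution, the receiver definition and the pipeline reference both live in agent_config.yaml (the file exporter in the upstream README is only an example and is not needed): the statsd receiver must appear under the top-level receivers: section and then be listed in the metrics pipeline next to the exporters already in use, which is signalfx for the agent. A hedged sketch of just the relevant parts, with everything else left as shipped:

receivers:
  statsd:
    endpoint: "localhost:8125"
    aggregation_interval: 60s

service:
  pipelines:
    metrics:
      receivers: [hostmetrics, otlp, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]

If all metrics stop as soon as statsd is added, the collector has most likely failed to start one component (a YAML indentation slip or port 8125 already being in use are common causes, and a single bad component keeps the whole pipeline from starting); journalctl -u splunk-otel-collector should show the exact error.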
Requirement: We need to monitor the Customer Decision Hub (CDH) portal, including Campaigns and Dataflows, using Real User Monitoring (RUM) in AppDynamics.

Steps Taken: We injected the AppDynamics JavaScript agent code into the UserWorkForm HTML fragment rule. This is successfully capturing OOTB (Out-of-the-Box) screens but is not capturing Campaigns-related screens.

Challenges: Pega operates as a Single Page Application (SPA), which complicates page load event tracking for Campaigns screens. Additionally, the CDH portal lacks a traditional front-end structure (HTML/CSS/JS), as Pega primarily serves server-generated content, which may restrict monitoring.

Has anyone here successfully implemented such an integration? What are the best practices for passing this kind of contextual data from Pega to AppDynamics? Looking forward to your insights! Best regards,
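For SPA-style screens the JavaScript agent needs virtual page (SPA2) monitoring enabled, otherwise only full page loads such as the OOTB screens get reported. A hedged sketch of the adrum snippet for the UserWorkForm fragment, where the app key and beacon URL are placeholders for your controller's values:

<script type="text/javascript">
  window["adrum-start-time"] = new Date().getTime();
  (function(config){
      config.appKey = "<YOUR-EUM-APP-KEY>";
      config.adrumExtUrlHttps = "https://cdn.appdynamics.com";
      config.beaconUrlHttps = "https://<your-eum-collector>";
      config.spa = { "spa2": true };   // report XHR-driven screen changes as virtual pages
      config.xd = { enable: true };
  })(window["adrum-config"] || (window["adrum-config"] = {}));
</script>
<script src="https://cdn.appdynamics.com/adrum/adrum-latest.js"></script>

Campaign and Dataflow context can then be attached to those virtual pages through the agent's custom user data hooks, although how much of that context is available depends on what Pega actually exposes to the browser.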
Where should I get a trial copy of the AppDynamics on-prem version (EUM and Controller) for evaluation purposes for a few weeks?
Is it possible to use a python script to perform transforms during event indexing? My aim is to remove keys from json files to reduce volume. I'm thinking of using a python script that decodes the json, modifies the resulting dict and then encodes the result in a new json that will be indexed.
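Splunk does not run arbitrary Python during indexing; the supported ways to drop JSON keys before they hit the index are props/transforms on the parsing tier (SEDCMD or INGEST_EVAL), or Ingest Actions / Edge Processor. A hedged SEDCMD sketch, where the sourcetype and key name are placeholders; it is a blunt regex, so it can leave a dangling comma if the key happens to be the last one in the object, and it needs testing against sample events:

# props.conf on the indexers (or the heavy forwarder that parses the data)
[my_json_sourcetype]
SEDCMD-drop_big_key = s/"debug_payload":\s*"[^"]*",?//g

If you really do need Python, the place to run it is before Splunk reads the file at all, e.g. a scheduled script or scripted input that rewrites the JSON and writes the slimmed-down version to the monitored path.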
Hi, I am trying to build a dashboard for user gate access. How do I visualise this with live data? I am looking for built-in visualisations that would help with this, something like a missile map but for users moving from one gate to another.
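One option that fits movement between gates is the Sankey Diagram custom visualization from Splunkbase; it only needs a search that yields a source, a target and a count. A hedged sketch, with the index and field names (user, gate) invented for illustration:

index=gate_access
| sort 0 user _time
| streamstats current=f window=1 global=f last(gate) AS previous_gate BY user
| where isnotnull(previous_gate)
| stats count BY previous_gate, gate

Fed into the Sankey visualization, previous_gate becomes the source node, gate the target and count the link weight; put the panel on a dashboard with a short auto-refresh interval to get the "live" effect.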
Hi everyone, I've recently integrated Lansweeper (cloud) data into my Splunk Cloud instance, but over the past few days, I've been encountering some ingestion issues. I used the add-on: https://splunkbase.splunk.com/app/5418 Specifically, the data source intermittently stops sending data to Splunk without any clear pattern. Here's what I've checked so far: My configuration seems fine, and the polling interval is set to 300 seconds. The ingestion behavior appears inconsistent, as seen in the attached image. Based on the type of data Lansweeper generates, I wouldn't expect this inconsistency. While double-checking my configuration, I noticed an error, yet the source still manages to ingest data sporadically at certain hours. Has anyone experienced similar issues or could provide guidance on how to debug this further? Thanks in advance for your help!
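When the input pauses, the add-on's own logging in _internal is usually the quickest place to see whether it is hitting token expiry, API throttling or timeouts. A hedged search, since the source filter is only a guess at how this add-on names its log files:

index=_internal source=*lansweeper* (ERROR OR WARN OR WARNING)
| stats count BY source, log_level

Comparing the timestamps of those errors with the gaps in your ingestion chart should show whether the pauses line up with authentication refreshes or API errors on the Lansweeper side.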
Hi all, I am currently facing an issue in my Splunk environment. We need to forward data from Splunk to a third-party system, specifically Elasticsearch. For context, my setup consists of two indexers, one search head, and one deployment server. Could anyone share the best practices for achieving this? I’d appreciate any guidance or recommendations to ensure a smooth and efficient setup. Thanks in advance!
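The usual pattern is to have the indexers (or a dedicated heavy forwarder in front of them) send a second, raw copy of the data to the third-party listener via outputs.conf, because Elasticsearch cannot read Splunk's cooked S2S protocol; most setups point it at a Logstash/Beats-style TCP or syslog input. A hedged sketch, with hostnames, ports and group names as placeholders:

# outputs.conf on a heavy forwarder (or each indexer)
[tcpout]
defaultGroup = primary_indexers, elastic_copy

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:elastic_copy]
server = logstash.example.com:5044
sendCookedData = false

If only some sourcetypes should go to Elasticsearch, drop elastic_copy from defaultGroup and route the relevant sourcetypes to it with _TCP_ROUTING in props.conf/transforms.conf instead.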
Every time we have to force replication on the SH nodes of an SH cluster, the inputs.conf replicates and overwrites the hostname. Is there any way to blacklist a .conf file by location to prevent it replicating when you do a forced resync of the SH nodes?
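Two angles to look at (both hedged, so check the server.conf spec for your version): the SHC only replicates app- and user-level configuration, so a host override kept in etc/system/local/inputs.conf on each member is never touched by a resync; and server.conf also lets you switch off replication for a whole conf type, along these lines:

# server.conf on each SHC member
[shclustering]
conf_replication_include.inputs = false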
I've been working on a search that I *finally* managed to get working: given a network switch and port name, it finds all the devices that have connected to that specific port over a period of time. Fortunately, most of the device data is included alongside the events which contain the switch/port information... that is, everything except the hostname. Because of this, I've used the join command to run a second search against a second data set which contains the hostnames for all devices that have connected to the network, matching on the shared MAC address field. The search works, and that's great, but it can only cover a time period of about a day or so before the subsearch breaks past the 50k event limit. Is there any way I can get rid of the join command and maybe use the stats command instead? That's what similar posts to this one seem to suggest, but I have trouble wrapping my head around how the stats command can be used to correlate data from two different events from different data sets... in this case the dhcp_host_name getting matched to the corresponding device in my networking logs. I'll gladly take any assistance. Thank you.

index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6"))
| eval time=strftime(_time,"%Y-%m-%d %T")
| join type=left left=L right=R max=0 where L.src_mac=R.src_mac L.IP_Address=R.src_ip
    [| search index="indexB" source="/var/logs/devices.log"
     | fields src_mac src_ip dhcp_host_name]
| stats values(L.time) AS Time, count as "Count" by L.src_mac R.dhcp_host_name L.IP_Address L.SwitchID L.Port_Id
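One way to drop the join is to pull both data sets into a single search and let stats stitch them together on the shared MAC address (and IP), since stats is not subject to the 50k subsearch cap: events from indexA carry the switch/port details, events from indexB carry dhcp_host_name, and grouping by the common key merges them. A hedged, untested rewrite that assumes indexB's src_mac is already in colon-separated form:

(index="indexA" log_type IN(Failed_Attempts, Passed_Authentications) IP_Address=* SwitchID=switch01 Port_Id=GigabitEthernet1/0/13)
OR (index="indexB" source="/var/logs/devices.log")
| rex field=message_text "\((?<src_mac>[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4}\.[A-Fa-f0-9]{4})\)"
| eval src_mac=if(index=="indexA", lower(replace(src_mac, "(\w{2})(\w{2})\.(\w{2})(\w{2})\.(\w{2})(\w{2})", "\1:\2:\3:\4:\5:\6")), lower(src_mac))
| eval ip=coalesce(IP_Address, src_ip)
| eval time=if(index=="indexA", strftime(_time,"%Y-%m-%d %T"), null())
| stats values(time) AS Time, values(dhcp_host_name) AS dhcp_host_name, values(SwitchID) AS SwitchID, values(Port_Id) AS Port_Id, count(eval(index=="indexA")) AS Count BY src_mac, ip
| where Count > 0

The where Count > 0 at the end keeps only the MAC/IP pairs that actually appeared on that switch port, discarding devices that only exist in indexB.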
I've piped a Splunk log query extract into a table showing disconnected and connected log entries sorted by time. NB row 1 is fine. Row 2 is fine because it connected within 120 sec. Now I want to show "disconnected" entries with no subsequent "connected" row say within a 120 sec time frame. So, I want to pick up rows 4 and 5. Can someone advise on the Splunk query format for this?

Table = Connect_Log

Row  Time       Log text
1    7:00:00am  connected
2    7:30:50am  disconnected
3    7:31:30am  connected
4    8:00:10am  disconnected
5    8:10:30am  disconnected
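A sketch of one way to do this with streamstats, assuming the table comes from events with the log text in a field (the field name log_text is invented, so adjust it to your extraction): sort newest-first so each row can see the event that follows it in time, then keep disconnects whose next event is missing, is not a connect, or arrives more than 120 seconds later.

| sort 0 - _time
| streamstats current=f window=1 last(_time) AS next_time, last(log_text) AS next_text
| where log_text="disconnected" AND (isnull(next_text) OR next_text!="connected" OR (next_time - _time) > 120)
| eval Time=strftime(_time, "%I:%M:%S %p")
| table Time, log_text

Against the sample table this keeps rows 4 and 5: row 4's next event is another disconnect, and row 5 has no following event at all, while row 2 is dropped because a connect follows it 40 seconds later.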