Hello, why does changing addtime=false on a scheduled summary index (via advanced edit) have no effect? After the scheduled summary index ran at its scheduled time, the latest search under "view recent" still showed "addtime=t" instead of "addtime=f". addtime=false worked when I ran the search manually. Thank you for your help.
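One way to take the scheduled-action setting out of the picture entirely is to embed the summary write in the search string itself with the collect command, where addtime is passed explicitly (the index and source search here are placeholders):

```
index=my_data sourcetype=my_sourcetype
| stats count by host
| collect index=my_summary addtime=false
```

If the inline addtime=false is honored on the schedule while the advanced-edit value keeps reverting to addtime=t, that points at the saved-search configuration (for example a local vs. default savedsearches.conf precedence issue) rather than at collect itself.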
Any reason why this can't be visualized in a geo cluster map?

source="udp:514" index="syslog" NOT src_ip IN (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 17.0.0.0/8) action=DROP src_ip!="162.159.192.9" | iplocation src_ip | geostats count by country
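One thing worth checking: iplocation writes its output fields with initial capitals (Country, City, lat, lon), and geostats drops events that have no lat/lon, so a lookup that fails to resolve coordinates, or a by-clause on a lowercase country field that doesn't exist, can leave the map empty. A troubleshooting variant of the same search (a sketch):

```
source="udp:514" index="syslog" NOT src_ip IN (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 17.0.0.0/8) action=DROP src_ip!="162.159.192.9"
| iplocation src_ip
| where isnotnull(lat) AND isnotnull(lon)
| geostats count by Country
```

If the where clause removes everything, the problem is the geo lookup (or the src_ip extraction), not the visualization.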
i have splunk index configured in my openshift cluster as a configmap, now if i change the index on the cluster still my container logs are moving to the old index. is there something i am missing? 
Hello all - Trying to get Azure Event Hub data to flow into Splunk, but having issues configuring it with the Add-on for Microsoft Cloud Services. I have configured an app in Azure that has the Reader & Event Hub Receiver roles, and the Event Hub has been configured to receive various audit information. When I try to configure the input, I receive this error message in splunk_ta_microsoft_cloudservices_mscs_azure_event_hub_XYZ.log:

2024-03-08 16:20:31,313 level=ERROR pid=22008 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=mscs_azure_event_hub.py:run:939 | datainput="PFG-AzureEventHub1" start_time=1709914805 | message="Error occurred while connecting to eventhub: CBS Token authentication failed. Status code: None Error: client-error CBS Token authentication failed. Status code: None"

I then tried to enter the connection string primary key in the FQDN field, but receive the error below. This occurs because the add-on tries to create a .ckpt file, but the resulting file path is too long and contains invalid characters.

2024-03-08 14:41:32,112 level=ERROR pid=34216 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=utils.py:wrapper:72 | datainput="PFG-AzureEventHub1" start_time=1709908886 | message="Data input was interrupted by an unhandled exception."
Traceback (most recent call last):
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\splunksdc\utils.py", line 70, in wrapper
    return func(*args, **kwargs)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 933, in run
    consumer = self._create_event_hub_consumer(workspace, config, credential, proxy)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 851, in _create_event_hub_consumer
    args.consumer_group,
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 238, in open
    checkpoint = SharedLocalCheckpoint(fullname)
  File "L:\Program Files\Splunk\etc\apps\Splunk_TA_microsoft-cloudservices\lib\modular_inputs\mscs_azure_event_hub.py", line 103, in __init__
    self._fd = os.open(fullname, os.O_RDWR | os.O_CREAT)
FileNotFoundError: [Errno 2] No such file or directory: 'L:\\Program Files\\Splunk\\var\\lib\\splunk\\modinputs\\mscs_azure_event_hub\\Endpoint=sb://REDACTED.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=REDACTED-insights-activity-logs-$Default.v1.ckpt'

Here is my inputs.conf stanza for the add-on:

[mscs_azure_event_hub://PFG-AzureEventHub1]
account = AzureActivity
consumer_group = $Default
event_hub_name = insights-activity-logs
event_hub_namespace = REDACTED.servicebus.windows.net
index = azure-activity
interval = 300
max_batch_size = 300
max_wait_time = 10
sourcetype = mscs:azure:eventhub
use_amqp_over_websocket = 1

I have been stuck on this for the past couple of days. Any advice would be greatly appreciated!
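The FileNotFoundError is telling: the checkpoint filename is built from the configured namespace value, and here it contains the entire connection string (Endpoint=sb://...;SharedAccessKey=...), which produces a path Windows cannot create. A sketch of the intended shape of the stanza, with event_hub_namespace holding only the bare FQDN and the access key kept in the account configuration rather than the namespace field:

```
[mscs_azure_event_hub://PFG-AzureEventHub1]
account = AzureActivity
event_hub_namespace = REDACTED.servicebus.windows.net
event_hub_name = insights-activity-logs
consumer_group = $Default
index = azure-activity
interval = 300
sourcetype = mscs:azure:eventhub
use_amqp_over_websocket = 1
```

With the namespace restored, the remaining "CBS Token authentication failed" error usually points at the credential itself: a wrong key or secret, a missing data-plane role such as Azure Event Hubs Data Receiver, or a blocked AMQP/WebSocket port, though that is a general hypothesis rather than a confirmed diagnosis for this setup.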
Recently our TA was rejected for Splunk Cloud compatibility due to a configuration option that would allow our customers to disable SSL verification so that they can make REST API calls to a server that has a self-signed TLS certificate. The TA uses Python code for the inputs, and one of the configuration options when setting up an input was to enable or disable SSL verification. Customers using servers with self-signed certificates could opt to disable verification, which would set the verify parameter of helper.send_http_request to False.

This option passed Cloud compatibility until recently, when we were notified that external network calls must be made securely, so our TA no longer qualifies for Cloud compatibility with the option to set verify=False. Has anyone else run into this issue, and is there a solution other than forcing customers to purchase TLS certificates from a trusted CA? I did see there is an option on the helper.send_http_request call to specify a CA bundle, but we have no control over what CA is used to generate the self-signed certificate, so there is no way to include a bundle in the TA. Any suggestions are welcome.
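One pattern that may satisfy the Cloud vetting while still supporting self-signed certificates is to replace the on/off toggle with a customer-supplied CA-bundle path: the customer exports their own certificate (or chain) onto the search head and points a TA setting at it, so verification is never disabled, just anchored to their trust root. A minimal sketch of the mapping logic (the function name resolve_verify and the parameter name ca_bundle_path are made up for illustration; they are not part of the add-on builder helper API):

```python
def resolve_verify(ca_bundle_path=None):
    """Map the input's TLS settings to the `verify` argument passed to
    helper.send_http_request / requests.

    Instead of an enable/disable toggle (rejected for Splunk Cloud), expose
    a CA-bundle path option: customers with self-signed certificates can
    upload their own cert and point this setting at it.
    """
    if ca_bundle_path:
        # requests accepts a filesystem path to a CA bundle as the
        # verify value, so verification stays on but trusts this root
        return ca_bundle_path
    # default: verify against the system / certifi trust store
    return True
```

The key property is that every code path keeps certificate verification enabled, which is what the Cloud vetting appears to require.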
Hello, how can I modify _time when running a summary index on a scheduled search? When a scheduled search writes to a summary index, _time is set by default to info_min_time (the start of the search's time range) rather than to the time the search ran. So if I collect into the summary index now over the last 30 days, _time is spread across the last 30 days instead of the current time. The problem is that a search over the past 24 hours then shows no data, because the summarized events are dated up to 30 days back, and I have to search over the past 30 days instead. Please suggest. I appreciate your help. Thank you.
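A commonly used workaround is to stamp _time explicitly before the summary write, for example with an inline collect (a sketch; the index names and source search are placeholders):

```
index=my_data earliest=-30d
| stats count by host
| eval _time=now()
| collect index=my_summary addtime=false
```

How an explicit _time interacts with the addtime setting of the scheduled summary action can vary, so it is worth verifying the resulting timestamps on a test summary index before relying on this in production.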
I have a customer (IHAC) trying to ingest logs from their self-hosted Trellix instance. When I try to add an account, the URL field only lists: Global, Frankfurt, India, Singapore, Sydney. There is no other input field to specify an actual FQDN/IP. Am I missing something, or is this feature not present?
Hi Splunk Team, could you guide me through the process of consolidating ThousandEyes alerts into Splunk so they are centralized on a dashboard? Please share each step of the process. Thanks.
Hi Team, I'm currently using version 8.2.10 and encountered an issue today: my admin account has disappeared from Users and Authentication -> Users. I'm perplexed by this and would appreciate any insight into why it might have happened, as well as guidance on how to prevent similar incidents in the future.
When the indexing pipeline begins backing up at some stage, which resources are responsible for the bottleneck? Obviously, once backed up, the problem will overflow into other areas, but is there a rule of thumb along the lines of: if the backup is at the parsing pipeline, storage I/O is too low; at the merging pipeline, CPU; at the typing pipeline, memory; at the index pipeline, network bandwidth; and so on? I am specifically looking for info regarding a heavy forwarder, but any help would be appreciated. (It's not as bad as the picture makes it seem, just posting for the visual.)
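There is no hard rule, but the queue fill levels in metrics.log on the heavy forwarder itself are the usual starting point: the first queue in the chain (parsing, aggregation, typing, index/output) that is persistently full is generally where the bottleneck sits, and everything upstream of it backs up as a side effect. A sketch of a search for this (the host value is a placeholder):

```
index=_internal host=my_heavy_forwarder source=*metrics.log* group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct=round(current_size_kb / max_size_kb * 100, 1)
| timechart span=5m max(fill_pct) by name
```

As rough heuristics only: a full typing queue often implicates regex/transforms CPU, while a full index/output queue on a heavy forwarder often implicates the network path or the downstream indexers rather than local resources.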
When adding a new cluster manager to a redundant cluster manager setup, do I need to manually deploy the current manager-apps?
Hello, I need help assigning a text box value to a radio button, but it's not working.

<form>
  <label>assign text box value to Radio button</label>
  <fieldset submitButton="false">
    <input type="radio" token="tokradio" searchWhenChanged="true">
      <label>field1</label>
      <choice value="category=$toktext$">Group</choice>
      <default>category=$toktext$</default>
      <initialValue>category=$toktext$</initialValue>
    </input>
    <input type="text" token="toktext" searchWhenChanged="false">
      <label>field2</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>tokradio=$tokradio$</title>
      <table>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Thanks in advance. @bowesmana @tscroggins @gcusello @yuanliu @ITWhisperer
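Tokens embedded in <choice value>, <default>, and <initialValue> are evaluated when the radio input initializes and are not re-evaluated each time $toktext$ changes, which is the usual reason this pattern has no effect. A common workaround (a sketch; the derived token name tokfilter is made up here) is to keep the choice values static and derive the real filter token in a <change> handler:

```xml
<input type="radio" token="tokradio" searchWhenChanged="true">
  <label>field1</label>
  <choice value="group">Group</choice>
  <default>group</default>
  <change>
    <condition value="group">
      <set token="tokfilter">category=$toktext$</set>
    </condition>
  </change>
</input>
```

Panels then reference $tokfilter$ in their searches. Note the handler fires when the radio selection changes, not when the text box changes, so depending on the desired behavior the panels may want to reference category=$toktext$ directly instead.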
Hi, I'm facing an issue with creating a support ticket. I'm on the Enterprise version at a company that has a support account, and I'm new to the security team. I've tried to contact support via the support form (4 times) but got no answer. I've also tried to call support, but they answered that I need to ask my manager to add me via the admin portal, or to contact support for help with that. My manager isn't able to do that. This is really blocking me; does anyone have advice? Thanks.
Hi Team, I have created a dashboard with the query below. The output is a column chart with a time picker, displaying the top 10 EventCodes with their counts once a time range is chosen and Submit is clicked.

index=windows host=* source=WinEventLog:System EventCode=* (Type=Error OR Type=Critical) | stats count by EventCode Type | sort -count | head 10

Once the results are displayed in the column chart, my requirement is: if we click one of the EventCodes in the chart (for example 4628), it should navigate to a new panel or window showing the top 10 host, source, Message, and EventCode values, along with the count, for EventCode 4628. Something like that is what we want displayed, and it should happen when clicking the EventCode in the column chart of the existing dashboard. Example:

index=windows host=* source=WinEventLog:System EventCode=4628 (Type=Error OR Type=Critical) | stats count by host source Message EventCode | sort -count | head 10

Kindly let me know how to achieve this requirement in a dashboard.
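One way to wire this up in Simple XML (a sketch; the token name tok_eventcode is made up) is a <drilldown> on the chart that captures the clicked column's value, plus a second panel that only appears once that token is set:

```xml
<panel>
  <chart>
    <search>
      <query>index=windows source=WinEventLog:System (Type=Error OR Type=Critical) | stats count by EventCode | sort -count | head 10</query>
    </search>
    <drilldown>
      <set token="tok_eventcode">$click.value$</set>
    </drilldown>
  </chart>
</panel>
<panel depends="$tok_eventcode$">
  <title>Details for EventCode $tok_eventcode$</title>
  <table>
    <search>
      <query>index=windows source=WinEventLog:System EventCode=$tok_eventcode$ (Type=Error OR Type=Critical) | stats count by host source Message EventCode | sort -count | head 10</query>
    </search>
  </table>
</panel>
```

$click.value$ holds the x-axis value of the clicked column (the EventCode), and the depends attribute hides the detail panel until a column has been clicked.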
Hi there. A simple question; it's not for real usage, just curiosity. Does the UF block inputs for system paths by default? For example, theoretically an input like this:

[monitor:///...]
whitelist = .
index = root
sourcetype = root_all
disabled = 0

should ingest all non-binary files under the "/" path, including subdirectories. In fact, I find only the "/boot" path ingested. Is this a security feature that excludes system paths under "/" from being ingested? Thanks.
Which is better when migrating to new hardware: splunk offline --enforce-counts or a data rebalance? And do I have to put the peer into manual detention in the indexer cluster before running a data rebalance or splunk offline?
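For reference, the two operations look like this (run on the peer and on the cluster manager respectively; a sketch, so verify the flags against your version's CLI help):

```
# on the indexer peer being retired: go offline only after the cluster
# has re-created enough bucket copies to keep RF/SF met elsewhere
splunk offline --enforce-counts

# on the cluster manager: redistribute buckets evenly across peers
splunk rebalance cluster-data -action start
```

They solve different problems: offline --enforce-counts is for decommissioning a peer safely, while data rebalance evens out bucket distribution after peers are added. A hardware migration typically uses the former on each old peer, then the latter once the new peers have joined.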
Register here. This thread is for the Community Office Hours session with Neal Iyer, Sr. Principal Product Manager, on automated threat analysis with Splunk Attack Analyzer on Wed, April 17, 2024 at 1pm PT / 4pm ET.   Join us for an office hours session to ask questions about how automated threat analysis can enhance your existing security workflows, including: Practical applications and common use cases How Splunk Attack Analyzer integrates with other Splunk security solutions  Anything else you'd like to learn!   Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.   Look forward to connecting!
Hi, I have one weird problem: when I run a query in Splunk, events are found, but the event log field is always blank. However, the problem is fixed by the following steps: in the search results, go to All Fields in the left rail; change Coverage: 1% or more to All fields; click Deselect All; then click Select All Within Filter. After that I can see the event logs, and even if I change from All fields back to Coverage: 1% or more, I can still see them. But after I close the browser tab, go back to Splunk, and search again, the problem is still there. So the question is: why does Coverage: 1% or more cause this problem on the first query? Does anyone have an idea? Thanks.
Trying to set up the Splunk OTel Collector using the image quay.io/signalfx/splunk-otel-collector:latest in Docker Desktop or Azure Container Apps to read logs from a file using the filelog receiver and the splunk_hec exporter. However, I am receiving the following error:

2024-03-07 12:56:27 2024-03-07T17:56:27.001Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "Post \"https://splunkcnqa-hf-east.com.cn:8088/services/collector/event\": dial tcp 42.159.148.223:8088: i/o timeout (Client.Timeout exceeded while awaiting headers)", "interval": "2.984810083s"}

I am using the config below:

extensions:
  memory_ballast:
    size_mib: 500
receivers:
  filelog:
    include:
      - /var/log/*.log
    encoding: utf-8
    fingerprint_size: 1kb
    force_flush_period: "0"
    include_file_name: false
    include_file_path: true
    max_concurrent_files: 100
    max_log_size: 1MiB
    operators:
      - id: parser-docker
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: json_parser
    poll_interval: 200ms
    start_at: beginning
processors:
  batch:
exporters:
  splunk_hec:
    token: "XXXXXX"
    endpoint: "https://splunkcnqa-hf-east.com.cn:8088/services/collector/event"
    source: "otel"
    sourcetype: "otel"
    index: "index_preprod"
    profiling_data_enabled: true
    tls:
      insecure_skip_verify: true
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]
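Before changing the collector config, it can help to confirm that the container can reach the HEC endpoint at all: "dial tcp ... i/o timeout" means the TCP connection itself never completes (a firewall/NSG rule, no egress from the container environment, or HEC not listening), not an OTel misconfiguration. From inside the container or its network, for example:

```
curl -k "https://splunkcnqa-hf-east.com.cn:8088/services/collector/health"
```

A reachable HEC should answer this health endpoint; no response at all would confirm a network-path problem between the container and port 8088 rather than a problem with the exporter settings.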
Hi, I am trying to explore APM in Splunk Observability. However, I am facing a challenge setting up AlwaysOn Profiling, and I am wondering if this feature is unavailable in the trial version. Can someone confirm?