All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

For simplicity, assume I have the following saved as a report (testReport):

index=testindex host=testhost earliest=-90m latest=now

I need to create two bar graphs in the same chart comparing two dates. For starters, I need to be able to run the above with a time range I specify, overriding the time range above:

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"

I have seen a few similar questions here, but I don't think any of them has a working solution.
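A sketch of one way the comparison is often built, reusing the savedsearch syntax from the post and assuming the earliest/latest arguments are honored (time modifiers written into the saved search string itself normally take precedence, so the report may need to drop its inline earliest/latest for the override to work): run the report once per day, label each run, and chart the two series together.

| savedsearch "testReport" earliest="12/08/2024:00:00:00" latest="12/08/2024:23:59:00"
| eval day="Dec 08"
| append
    [| savedsearch "testReport" earliest="12/09/2024:00:00:00" latest="12/09/2024:23:59:00"
     | eval day="Dec 09"]
| eval hour=strftime(_time, "%H")
| chart count over hour by day

The second date (12/09/2024) and the hour-by-hour split are only placeholders for whatever the two compared dates and the real aggregation turn out to be.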
https://docs.splunk.com/Documentation/Splunk/9.4.0/ReleaseNotes/MeetSplunk#What.27s_New_in_9.4

Why the new Splunk TcpOutput persistent queue?

- Scheduled loss of connectivity for an extended period, with data transmission resuming once the connection is back up.
- Assuming there is enough storage, the tcpout output queue can persist all events to disk instead of buying an expensive (and unsupported) third-party subscription to persist data to SQS/S3.
- If there are two tcpout destinations and one is down for an extended period, and the down destination has a large enough PQ to persist data, then the second destination is not blocked. The second destination will block only once the PQ of the down destination is full.
- You don't have to pay for third-party SQS and S3 puts.
- A third-party/external S3 persistent queue introduces permanent additional latency (due to the detour through the external SQS/S3 queue), and there is a chance of losing events (events ending up in the SQS/S3 DLQ).
- Third-party/external SQS/S3 persistent queuing requires batching events, which adds additional latency in order to reduce SQS/S3 put costs.
- Unwanted additional network bandwidth is used by uploading all data to SQS/S3 and then downloading it again.
- The third party imposes upload payload size limits.
- Monitored corporate laptops can be off the network, not connected to the internet, or off VPN for extended periods. The laptop might later be switched off, but events should be persisted and forwarded as and when it reconnects to the network.
- Sensitive data stays persisted within your own network.
- On-demand persistent queuing on the forwarding tier when indexer clustering is down.
- On-demand persistent queuing on the forwarding tier when indexer cluster indexing is slow due to high system load.
- On-demand persistent queuing on the forwarding tier when the indexer cluster is in a rolling restart.
- On-demand persistent queuing on the forwarding tier during an indexer cluster upgrade.
- You don't have to use the decade-old S2S protocol version suggested by some third-party vendors (you all know enableOldS2SProtocol=true in outputs.conf).

How to enable? Just set persistentQueueSize in outputs.conf:

[tcpout:splunk-group1]
persistentQueueSize=1TB

[tcpout:splunk-group2]
persistentQueueSize=2TB

Note: Sizing guide coming soon.
Hi, I'm using the journald input in the universal forwarder to collect logs from journald: https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/CollecteventsfromJournalD. The data comes to my indexer as expected. One of the fields that I send with the logs is the TRANSPORT field, and when I search the logs I can see that the TRANSPORT event metadata is present as expected.

I would like to set the logs' sourcetype dynamically based on the value of the TRANSPORT field. Here are the props.conf and transforms.conf that I'm trying to use.

props.conf:

[default]
TRANSFORMS-change_sourcetype = set_new_sourcetype

transforms.conf:

[set_new_sourcetype]
REGEX = TRANSPORT=([^\s]+)
FORMAT = sourcetype::test
DEST_KEY = MetaData:Sourcetype

Unfortunately, the above seems to have no impact on the logs. I think the problem lies in the REGEX field: when I change it to REGEX = .* , all of the events have their sourcetype set to test as expected. Why can't I use the TRANSPORT field in the REGEX?
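A sketch of one variant that may be worth testing, on the assumption that TRANSPORT arrives as an indexed field in _meta rather than inside the raw event text: a transform's REGEX is applied to _raw by default, so pointing it at the metadata with SOURCE_KEY changes what the regex actually sees. The exact separator between field name and value in _meta is also an assumption, so both forms are allowed below.

transforms.conf:

[set_new_sourcetype]
# run the regex against indexed metadata instead of _raw (assumption: TRANSPORT lives there)
SOURCE_KEY = _meta
# accept either TRANSPORT=value or TRANSPORT::value, depending on how the field was written
REGEX = TRANSPORT(?:=|::)(\S+)
FORMAT = sourcetype::test
DEST_KEY = MetaData:Sourcetype

Scoping the props.conf stanza to the journald sourcetype rather than [default] also keeps the transform from running against every event.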
Hello, in case you can't use an SSL certificate, you may modify the Cybereason Python script.
Hello everybody, I am facing some challenges with a custom log file containing bits of XML surrounded by some sort of headers. The file looks something like this:

[1][DATA]BEGIN --- - 06:03:09[012]
<xml>
<tag1>value</tag1>
<nestedTag>
<tag2>another value</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 06:03:09[012]
[1][DATA]BEGIN --- - 07:03:09[123]
<xml>
<tag1>some stuff</tag1>
<nestedTag>
<tag2>other stuff</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 07:03:09[123]
[1][DATA]BEGIN --- - 08:03:09[456]
<xml>
<tag1>some more data</tag1>
<nestedTag>
<tag2>fooband a bit more</tag2>
</nestedTag>
</xml>
[1][DATA]END --- - 08:03:09[456]

It is worth noting that the XML parts can be very large. I would like to take advantage of Splunk's automatic XML parsing, as it is not realistic to do it manually in this case, but the square-bracket lines around each XML block seem to prevent the XML parser from doing its job and I get no field extraction. So, what I would like to do is:

- Converting the "data begin" line with the square brackets, before each XML block, into an XML-formatted line, so that I can use it for the time of the event (the date itself is encoded in the filename...) and let Splunk parse the rest of the XML data automatically.
- Stripping out the lines with the "data end" bit after each block of XML. These are not useful, as they provide the same time as the "data begin" line.
- Aggregating the XML lines of the same block into one event.

What I have tried with props.conf and transforms.conf:

props.conf:

[my_sourcetype]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG =
KV_MODE = xml
LINE_BREAKER = \]([\r\n]+)\[1\]\[DATA\]BEGIN
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = true
TRANSFORMS-full=my_transform # only with transforms.conf v1
TRANSFORMS-begin=begin # only with transforms.conf v2
TRANSFORMS-end=end # only with transforms.conf v2

transforms.conf (version 1):

[my_transform]
REGEX = (?m)\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*([\r\n]+)([^\[]*)\[1\]\[DATA\]END.*$[\r\n]*
FORMAT = <time>$1</time>$2$3
WRITE_META = true
DEST_KEY = _raw

transforms.conf (version 2):

[begin]
REGEX = (?m)^\[1\]\[DATA\]BEGIN --- - (\d{2}:\d{2}:\d{2}).*$
FORMAT = <time>$1</time>
WRITE_META = true
DEST_KEY = _raw

[end]
REGEX = (?m)^\[1\]\[DATA\]END.*$
DEST_KEY = queue
FORMAT = nullQueue

With the various combinations listed here, I got all sorts of results:

- well-separated events, but with the square brackets left over
- one big block with all events aggregated together and no override of the square-bracket lines
- one event with the begin square-bracket line, truncated at 10k characters
- 4 events with one "time" XML tag but nothing else...

Could anybody help me out with this use case? Many thanks, Alex
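For what it's worth, a minimal sketch of a props-only alternative, under a couple of assumptions: that breaking on the BEGIN marker and reading only the time from it is acceptable (with the date coming from elsewhere, e.g. the filename, as noted above), and that SEDCMD-based stripping of the header and footer lines at index time is an option. The sourcetype name is the one from the post.

[my_sourcetype]
SHOULD_LINEMERGE = false
# start a new event at each BEGIN marker, keeping everything up to the next one together
LINE_BREAKER = ([\r\n]+)(?=\[1\]\[DATA\]BEGIN)
# take the event time from the BEGIN line
TIME_PREFIX = \[1\]\[DATA\]BEGIN\s+---\s+-\s+
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 40
# the XML blocks can be very large, so don't truncate them
TRUNCATE = 0
# strip the BEGIN and END lines once the timestamp has been read, leaving only the XML
SEDCMD-strip_begin = s/\[1\]\[DATA\]BEGIN[^\r\n]*[\r\n]+//g
SEDCMD-strip_end = s/[\r\n]+\[1\]\[DATA\]END[^\r\n]*//g
KV_MODE = xml

The idea is to let _time come from TIME_PREFIX/TIME_FORMAT instead of rewriting the header into a <time> tag, and to rely on SEDCMD (which is applied after line breaking and timestamp extraction) to leave pure XML in _raw, so KV_MODE = xml has something clean to work on at search time.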
Hi All,

I've installed the Splunk Add-on for Unix and Linux on both Splunk Enterprise and my forwarder, which is running 9.3.2. However, I keep running into the error below:

12-19-2024 15:54:30.303 +0000 ERROR ExecProcessor [1376795 ExecProcessor] - message from "/opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/vmstat_metric.sh" /opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/hardware.sh: line 62: /opt/splunkforwarder/var/run/splunk/tmp/unix_hardware_error_tmpfile: No such file or directory

The above comes from splunkd.log after I have stopped and restarted the SplunkForwarder.service. I am very new to Splunk and do not possess any certifications. My company has tasked me with learning and configuring Splunk, and I am enjoying it, except that I am unable to get this data sent to my indexer so that I can see it in Search and Reporting.

These are the steps taken so far:

- Installed the Splunk Add-on for Unix and Linux on my Enterprise UI machine.
- Installed the Splunk Add-on for Unix and Linux on my universal forwarder.
- As the local directory was not created at /opt/splunkforwarder/etc/apps/Splunk_TA_nix/, I created it, copied the inputs.conf from default to local, and then enabled the scripts I wanted.
- Made sure the splunk user was the owner and had the privileges needed on the local directory.
- Stopped the Splunk service and restarted it.
- Ran: cat /opt/splunkforwarder/var/log/splunk/splunkd.log | grep ERROR
- Almost every error is "unix_hardware_error_tmpfile: No such file or directory". If I create the tmpfile, it disappears and is not recreated.

I'm sure there are many other things I didn't mention, because I honestly don't remember; I have been trying to figure this issue out since yesterday and am not getting anywhere. PLEASE HELP!
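One quick check that may be worth making, assuming the message means the temp directory the Splunk_TA_nix scripts write into is missing rather than the file itself: recreate the run/tmp path on the forwarder and make sure the account running the forwarder owns it. The paths and the splunk user/group below are assumptions based on the log line above.

sudo mkdir -p /opt/splunkforwarder/var/run/splunk/tmp
sudo chown -R splunk:splunk /opt/splunkforwarder/var/run/splunk
sudo systemctl restart SplunkForwarder.service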
I am trying to set up a synthetic browser test that makes use of variables. I can't seem to find information about the usage of variables in Browser Tests other than this. So far I have tried to:

- Access a global variable as a URL in a "go to URL" step -> which leads to a "https:// is not a valid URL" error, although I can see the variable in the "variables" tab.
- Access global/predefined variables in "execute JavaScript" steps -> undefined.
- Set a variable via "Save return value from JavaScript" to a variable var and try to reuse it in both an "assert text present" step with {{custom.$var}} -> undefined, and an "execute JavaScript" step with custom.var -> undefined.
- Assign var/const in an "execute JavaScript" step and reference it in consecutive "execute JavaScript" steps -> undefined.
- Access built-in variables in "execute JavaScript" steps (except those that are only available in API tests) -> undefined.

Which raises the following questions for me:

- What is the idiomatic way to use variables in Synthetic Browser Tests? So far it seems to me that they can only be used to fill a field, as that is the only action mentioned here, and no other action I tried seems to support variables, which would quite honestly be really disappointing.
- Am I overlooking any documentation?
- Which kinds of actions support the use of variables created by other steps?

Thank you
Is there a way to capture the Thread Name for Business Transactions in Java? I see RequestGUID and URL are captured when looking through the UI. Thanks.
Hi Everyone, I need to send a hard-coded message to the users just before every daylight saving time change of the year, saying "Daylight savings is scheduled tomorrow, please be alerted", and I don't want to use any index for that, just the hard-coded message. Is it possible to create an alert based on this requirement?
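A sketch of one way this kind of index-free alert is often built, assuming a scheduled alert whose trigger condition is simply "number of results > 0": generate the row with makeresults (which reads no index at all) and attach the message.

| makeresults
| eval message="Daylight savings is scheduled tomorrow, please be alerted"
| table _time, message

Scheduling is the fiddly part: standard cron can't directly express "the day before the second Sunday in March", since the day-of-month and day-of-week fields combine with OR, so pinning the exact dates each year (or keeping two yearly schedules, one per change) tends to be the simplest approach.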
I have an app server running a custom application that is, unfortunately, a bit buggy. This bug causes its service to spike in CPU usage and degrade performance. There's a fix in the works, but because I can manually resolve it by restarting the service, it is lower on the priority list.

I currently use Splunk to send me an alert when CPU usage gets to 80% or more; this lets me get in there to do the reset before performance degrades. It looks like Splunk used to have a simple feature to run a script from the UF's /bin/ directory, which would have made this pretty simple, but it is deprecated and I assume doesn't work at all. Now, however, we're supposed to create a custom alert action to reinvent this alert action.

Following the basic directions here, I've come to find I don't have the ability to create a new alert action: Create alert actions - Splunk Documentation. I can "Browse More" and view the existing ones, but there's no ability to create anything new. Is there some sort of prerequisite before this can be done? It does not appear to be mentioned in the documentation if that's the case. Alternatively, does Splunk still trigger scripts even though the feature is deprecated? All of the above needs to be learned, but it seems like a lot of overhead to have one specific server run net stop [service] && net start [service].
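For orientation, a minimal sketch of what a custom alert action looks like on disk. As far as I can tell, custom alert actions are packaged as an app (alert_actions.conf plus a script in bin/), so they are created by deploying files on the search head (or via the Add-on Builder) rather than through a button in Splunk Web. All names below are placeholders, and the restart command is only an illustration of where your own logic would go.

# $SPLUNK_HOME/etc/apps/restart_service_alert/default/alert_actions.conf
[restart_service]
is_custom = 1
label = Restart app service
description = Restart the buggy service when CPU stays above threshold
payload_format = json
disabled = 0

# $SPLUNK_HOME/etc/apps/restart_service_alert/bin/restart_service.py
# invoked by splunkd with --execute and the alert payload as JSON on stdin
import json
import subprocess
import sys

if __name__ == "__main__" and "--execute" in sys.argv:
    payload = json.loads(sys.stdin.read())
    # placeholder: reach the app server however your environment allows (ssh, WinRM, an API, ...)
    subprocess.run(["ssh", "appserver", "net stop MyService && net start MyService"], check=False)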
Hi Team,

We have recently installed the OCI add-on on a Splunk heavy forwarder to collect OCI logs from Oracle Cloud instances. After installing and configuring the OCI inputs, we are getting the errors below. Can you please help us with the resolution?

12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" oci._vendor.requests.exceptions.SSLError: (MaxRetryError("OCIConnectionPool(host='cell-1.streaming.XX.XX.XX.oci.oraclecloud.com', port=443): Max retries exceeded with url: /20180418/streams/XX.XX.XX/groupCursors (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))"), 'Request Endpoint: POST https://XX.XX.XX/groupCursors See https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_troubleshooting.htm for help troubleshooting this error, or contact support and provide this full error message.')
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" During handling of the above exception, another exception occurred:
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 121, in worker
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" result = (True, func(*args, **kwds))
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 102, in get_messages
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" cursor = get_cursor_by_group(global_stream_clients[i], stream_id, stream_id, opt_partition)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 59, in get_cursor_by_group
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = sc.create_group_cursor(sid, cursor_details)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/streaming/stream_client.py", line 505, in create_group_cursor
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" api_reference_link=api_reference_link)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/retry/retry.py", line 308, in make_retrying_call
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = func_ref(*func_args, **func_kwargs)
12-19-2024 14:20:13.722 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 485, in call_api
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" response = self.request(request, allow_control_chars, operation_name, api_reference_link)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci/base_client.py", line 606, in request
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise exceptions.RequestException(e)
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" """
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" The above exception was the direct cause of the following exception:
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" Traceback (most recent call last):
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py", line 510, in stream_events
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" get_response = r.get()
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" File "/apps/splunk/etc/apps/TA-oci-logging-addon/bin/multiprocess/pool.py", line 657, in get
12-19-2024 14:20:13.723 +0000 ERROR ExecProcessor [30077 ExecProcessor] - message from "/apps/splunk/bin/python3.7 /apps/splunk/etc/apps/TA-oci-logging-addon/bin/oci_logging.py" raise self._value
I have been trying to monitor a SQLite database, and have been having nothing but problems. I managed to find some stanzas that apparently worked for other people, notably this one: https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitor-SQLite-database-file-with-Splunk-DB-Connect/m-p/294331

I am actually able to see the driver in the installed drivers tab, and I can see my stanza among the possible connections when trying to test a query. I used exactly what was in that previous question and that didn't work, and I have tried several other changes; I currently have this:

db_connection_types.conf:

[sqlite]
displayName = SQLite
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.sqlite.JDBC
jdbcUrlFormat = jdbc:sqlite:<database>
ui_default_catalog = main
database = main
port = 443

db_connections.conf:

[incidents]
connection_type = sqlite
database = /opt/tece/pb_data/data.db
host = localhost
identity = owner
jdbcUrlFormat = jdbc:sqlite:<database>
jdbcUseSSL = 0

I am getting this error now:

I also see this in the logs:

2024-12-19 14:38:59.018 +0000 Trace-Id= [dw-36 - GET /api/inputs] INFO c.s.d.s.dbinput.task.DbInputCheckpointFileManager - action=init_checkpoint_file_manager working_directory=/opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect
2024-12-19 14:39:15.807 +0000 Trace-Id=6dac40b0-1bcc-4410-bc28-53d743136056 [dw-40 - GET /api/connections/incidents/status] WARN com.splunk.dbx.message.MessageEnum - action=initialize_resource_bundle_files error=Can't find bundle for base name Messages, locale en_US

I have tried 2 separate SQLite drivers: the most up-to-date one, and the one specifically for the version of SQLite that my database uses. Anyone have any ideas?
Hi, I have the query below where I'm calculating the total prod server count in the first dataset, and in the second dataset I'm plotting a timechart of the server count. What I want to display is a line chart with the total prod server count shown as a threshold line and the server count as the other line.

index=data sourcetype="server"
| rex field=_raw "server=\"(?<EVENT_CODE>[^\"]*)"
| search [ | inputlookup prodata_eventcode.csv | fields EVENT_Code ]
| stats dc(host_name) as server_prod_count
| rename
| append
    [ | search index=appdata source=appdata_value
      | rex field=value "\|(?<Item>[^\|]+)?\|(?<EVENT_CODE>[^\|]+)|(?<PROD_Count>[^\|]+)?"
      | dedup DATE,EVENT_CODE
      | timechart span=1d sum(PROD_Count) as SERVER_COUNT ]
| table _time,local_PROD_COUNT,snow_prod_count
| rename DYNA_PROD_COUNT as SERVER_COUNT, snow_prod_count as Threshold

The question is: how can I get the threshold value into all the rows, so that I can plot threshold vs server count in the line graph? Below is the snapshot.
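A sketch of one common pattern for this, assuming the goal is to attach the single-row total from the first search as a Threshold column and copy it into every row of the timechart (index, lookup, and field names are taken from the post and may need adjusting):

index=appdata source=appdata_value
| rex field=value "\|(?<Item>[^\|]+)?\|(?<EVENT_CODE>[^\|]+)|(?<PROD_Count>[^\|]+)?"
| dedup DATE, EVENT_CODE
| timechart span=1d sum(PROD_Count) as SERVER_COUNT
| appendcols
    [ search index=data sourcetype="server"
      | rex field=_raw "server=\"(?<EVENT_CODE>[^\"]*)"
      | search [ | inputlookup prodata_eventcode.csv | fields EVENT_Code ]
      | stats dc(host_name) as Threshold ]
| filldown Threshold

appendcols lands the single Threshold value on the first row, and filldown copies it to the remaining rows, so the chart can draw SERVER_COUNT against a flat Threshold line. An eventstats max(Threshold) as Threshold after the appendcols works too, if you prefer not to rely on row order.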
I am trying to track file transfers from one location to another.

Flow: Files are copied to File copy location -> Target location

Both the File copy location and Target location logs are in the same index, but each has its own sourcetype. The File copy location has a log event per file, but the Target location has log events containing multiple file names.

Log format of File copy location:

2024-12-18 17:02:50, file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of Target location:

2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:

File Name    FileCopyLocation       Target Location
XYZ.csv      2024-12-18 17:02:50    2024-12-18 17:30:10
ABC.zip      2024-12-18 17:02:58    2024-12-18 17:30:10
123.docx     2024-12-18 17:03:38    2024-12-18 17:30:10
143.docx     2024-12-18 18:06:19    Pending

I want to avoid join.
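A sketch of one join-free approach, assuming placeholder sourcetype names of filecopy and target and that file_name is auto-extracted from the key=value pairs in the copy logs: normalize both sourcetypes to a common file_name field, expand the multi-file XML events, and then stats the two timestamps together.

index=myindex (sourcetype=filecopy OR sourcetype=target)
| rex max_match=0 "<FileName>(?<xml_file_name>[^<]+)</FileName>"
| eval file_name=coalesce(file_name, xml_file_name)
| mvexpand file_name
| eval copy_time=if(sourcetype=="filecopy", _time, null())
| eval target_time=if(sourcetype=="target", _time, null())
| stats min(copy_time) as copy_epoch min(target_time) as target_epoch by file_name
| eval FileCopyLocation=strftime(copy_epoch, "%Y-%m-%d %H:%M:%S")
| eval TargetLocation=coalesce(strftime(target_epoch, "%Y-%m-%d %H:%M:%S"), "Pending")
| table file_name FileCopyLocation TargetLocation

Files that never show up in a Target location event keep a null target_epoch, which the final coalesce renders as "Pending".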
What protocols does the Windows add-on use to collect data and send it to the Splunk server? HTTPS?
Hello there. I would like to ask about Splunk best practices, specifically regarding cluster architecture. One suggested practice is to configure all Splunk servers running Splunk Web (i.e., search heads) as members of the indexer cluster (at least that is what I hear from the architecture lesson). For example, there is a Splunk deployer; I would need to use this command, or achieve the same through the web UI:

splunk edit cluster-config -mode searchhead -manager_uri https://x.x.x.x:8089 (indexer cluster manager IP) -secret idxcluster

Another suggested practice is adding the Splunk servers mentioned above (such as deployers) to Distributed search > Search peers on the manager as well. I would like to know why these are good practices and what the benefits of doing them are. (The deployer is not really a search head, is it?) Thank you.
I want to increase one of my indexes' frozen time period from 12 months to 13 months. I have increased the Max Size of Entire Index from the Splunk indexer > Settings, but I know this is not enough, as the index's frozen time period is still set to 12 months. So where should I update this value? Should I update the indexes.conf file for the required indexes on the indexer server itself, which is installed on a Linux machine? What do I need to take care of while updating this frozen time period?
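For reference, a minimal sketch of the setting usually involved, assuming time-based retention is what rolls the data to frozen: frozenTimePeriodInSecs in indexes.conf on the indexer. The index name and size below are placeholders.

# indexes.conf on the indexer, in the app or location that defines the index
[my_index]
# ~13 months (395 days x 86400 seconds)
frozenTimePeriodInSecs = 34128000
# keep the size cap high enough that size-based rollover does not kick in before the time limit
maxTotalDataSizeMB = 500000

Buckets roll to frozen when they hit either the time limit or the size limit, whichever comes first, which is why both settings matter; if the indexers are clustered, the change is normally pushed from the cluster manager rather than edited locally on each peer.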
How high is the incoming data volume for monitoring? Where is the data stored?
Hello Everyone,

I'm currently exploring Splunk Observability Cloud to send log data. From the portal, it appears there are only two ways to send logs: via Splunk Enterprise or Splunk Cloud. I'm curious whether there is an alternative method to send logs using the Splunk HTTP Event Collector (HEC) exporter. According to the documentation here, the Splunk HEC exporter allows the OpenTelemetry Collector to send traces, logs, and metrics to Splunk HEC endpoints. Is it also possible to use fluentforward, otlphttp, signalfx, or anything else for this purpose?

Additionally, I have an EC2 instance running the splunk-otel-collector service, which successfully sends infrastructure metrics to Splunk Observability Cloud. Can this service also facilitate sending logs to Splunk Observability Cloud? According to the agent_config.yaml file provided by the splunk-otel-collector service, there are several pre-configured service settings related to logs, including logs/signalfx, logs/entities, and logs. These configurations use different exporters such as splunk_hec, splunk_hec/profiling, otlphttp/entities, and signalfx. Could you explain what each of these configurations is intended to do?

service:
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [otlphttp, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway, signalfx]
    metrics:
      receivers: [hostmetrics, signalfx, statsd]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx, statsd]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]
    metrics/internal:
      receivers: [prometheus/internal]
      processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
      # When sending to gateway, at least one metrics pipeline needs
      # to use signalfx exporter so host metadata gets emitted
      exporters: [signalfx]
    logs/signalfx:
      receivers: [signalfx, smartagent/processlist]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
    logs/entities:
      # Receivers are dynamically added if discovery mode is enabled
      receivers: [nop]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [otlphttp/entities]
    logs:
      receivers: [fluentforward, otlp]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [splunk_hec, splunk_hec/profiling]
      # Use instead when sending to gateway
      #exporters: [otlp/gateway]

Thanks!
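For what it's worth, a minimal sketch of a standalone logs pipeline built around the splunk_hec exporter, with the endpoint and token as placeholders. Whether the destination is an Observability Cloud ingest endpoint or a Splunk Enterprise/Cloud HEC input (surfaced in Observability via Log Observer Connect) depends on the environment, so treat the endpoint below purely as an assumption.

exporters:
  splunk_hec/logs:
    token: "${SPLUNK_HEC_TOKEN}"                                  # placeholder token
    endpoint: "https://my-hec-endpoint:8088/services/collector"   # placeholder HEC URL
    source: "otel"
    sourcetype: "otel"
service:
  pipelines:
    logs:
      receivers: [fluentforward, otlp]
      processors: [memory_limiter, batch]
      exporters: [splunk_hec/logs]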
Hi Everyone, I created my own lab to learn how to configure best practices for Windows. I created one Windows VM and ran a scan against localhost (127.0.0.1) to gather information such as open ports. But unfortunately, when it triggers, I can't see anything like the expected result. Maybe I need to configure something in Windows, or something else?