It's simple, really, since we know what precedes and follows the desired field. Just put the known text into the regular expression and add a named capture group between them. The pattern for the capture group can be either a non-greedy match of anything (.*?) or a match of anything that is not the delimiter that follows the field ([^\|]+).
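For example, with rex (the field name and delimiters here are illustrative, assuming the value sits between "user=" and a pipe):

| rex field=_raw "user=(?<username>[^\|]+)\|"

or, using the non-greedy form:

| rex field=_raw "user=(?<username>.*?)\|"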
I suppose I'll ask: did you verify network connectivity between the host (which I presume has a UF) and the HFs? And between the HFs and the indexing peers? Making sure there are no issues with switches or firewalls (needed ports open, etc.)
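For example, a quick reachability check from the forwarder host could look like this (the hostname is a placeholder, and 9997 is only the conventional receiving port):

nc -vz my-heavy-forwarder 9997

and likewise from each HF toward the indexing peers.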
@Prajapati- When a question is not directly related to core Splunk Enterprise (for example, this one concerns a third-party VirusTotal integration), please post it along with details so that people in the community can help. I have also moved your question to the right board of the community.
Yes @inventsekar , I'm able to verify the inputs:

/opt/splunkforwarder # ./bin/splunk btool inputs list --debug | grep tcp
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/inputs.conf [splunktcp]
/opt/splunkforwarder/etc/system/default/inputs.conf [tcp]
/opt/splunkforwarder/etc/system/local/inputs.conf [tcp://10.196.246.1:7514]
/opt/splunkforwarder # ./bin/splunk btool outputs list --debug | grep tcp
/opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default/outputs.conf [tcpout]
/opt/splunkforwarder/etc/system/default/outputs.conf tcpSendBufSz = 0
/opt/splunkforwarder/etc/system/local/outputs.conf [tcpout:ib_group]
Hi @mooree from the UF, do you receive other regular logs/app logs at the indexer? Using btool, pls verify that the perfmon input is getting read by the UF: $SPLUNK_HOME/bin/splunk btool inputs list --debug
Hi @BlueQ when you search with index=<your linux index> source=/var/log/messages* on Splunk, do you get results? Pls show us a search result with the /var/log/messages events, thanks.
Hi @NoamP .. I assume that the logs are already onboarded to the Splunk indexer (pls tell us how you onboarded the logs). Could you pls show us some sample logs? Then the SPL query can be created easily.. thanks.
The table below is an example: both files have the same index and sourcetype, but each has a different source. I need to search on a field from the 1st file, and the result should be a combination of fields from files 1 and 2.

File 1
T1_Fld 1   T1_Fld 2   Domain         T1_Fld 4   T1_Fld 5
AAA        xxx        google.com     yy1        bbb
AAB        xxx        Facebook.com   yy2        bbb
AAB        xxx        Gmail.com      yy3        bbb
AAD        xxx        Yahoo.com      yy4        bbb
AAE        xxx        xxx.com        yy5        bbb

File 2
Domain         IP
google.com     1.1.1.1
Facebook.com   2.2.2.2
Gmail.com      3.3.3.3
Yahoo.com      4.4.4.4
xxx.com        5.5.5.5

Consider I am running a search where T1_Fld 1=AAB; then the result table should look like this:

Output
T1_Fld 1   Domain         IP        T1_Fld 4
AAB        Facebook.com   2.2.2.2   yy2
AAB        Gmail.com      3.3.3.3   yy3
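One possible approach is an event-combining search like the sketch below; the index, sourcetype, and source names are placeholders, and the T1 fields are written with underscores on the assumption the extracted field names avoid spaces:

index=my_index sourcetype=my_sourcetype ((source="file1" T1_Fld_1="AAB") OR source="file2")
| stats values(T1_Fld_1) as T1_Fld_1, values(IP) as IP, values(T1_Fld_4) as T1_Fld_4 by Domain
| where isnotnull(T1_Fld_1) AND isnotnull(IP)
| table T1_Fld_1 Domain IP T1_Fld_4

The stats by Domain groups rows from both files on the shared Domain value, and the where clause keeps only domains present in both files.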
Hi @gcusello @ITWhisperer , Thanks for your response. Regarding the solution you are recommending, I agree with your point. Ideally, the sourcetypes should be different for different types of events. However, in our case, we have a parent-child relationship between the sourcetypes: we bifurcate the child sourcetype from the parent sourcetype. We observed that the TIME_PREFIX extractions were not applied if we defined them in the child stanza. It seems Splunk first performs the timestamp extraction from the parent, and the renaming of the sourcetype happens afterwards. So, we are trying to figure out a way to handle multiple event formats in the parent sourcetype stanza itself.
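That ordering matches how index-time processing works: a sourcetype rename via transforms happens in the typing pipeline, after timestamp extraction has already run against the parent stanza. A minimal sketch of that setup (all stanza names and patterns are placeholders; since TIME_PREFIX is a regex, one alternation can cover both event formats):

props.conf
[parent_st]
# timestamp extraction runs here, before the rename below takes effect,
# so both event formats must be handled in this stanza
TIME_PREFIX = ^(?:formatA_prefix|formatB_prefix)
MAX_TIMESTAMP_LOOKAHEAD = 30
TRANSFORMS-set_child = set_child_sourcetype

transforms.conf
[set_child_sourcetype]
REGEX = child_event_marker
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::child_st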
We have configured inputs.conf with a TCP input to fetch the logs from streaming and send them to the Splunk server via TCP output. Logs are not being forwarded to the Splunk server. Could someone please share a proper set of inputs.conf and outputs.conf for reading logs from TCP inputs?

inputs.conf
[tcp://1.2.3.4:7514]
connection_host=ip
queueSize=10MB
persistentQueueSize=50MB
index=test_data
sourcetype=testdata
_TCP_ROUTING=ib_group

outputs.conf
[tcpout:ib_group]
server=1.2.3.4:9997
useACK=false
@catherinelam A warm standby is only ever 2 servers: 1 Parent & 1 Child. The Parent synchronises to the Child via postgres sync, and rsync for shared files. The failover is still manual, but it can be scripted if you have the right probe set up on the LB to check and alert when the primary becomes unavailable. Personally, I think using AWS functionality to restore will give you a quicker time to recovery.
Splunk is failing to collect perfmon data from our Windows 2022 servers. I've extracted and deployed the stanzas from the Splunk TA for Windows to collect selected perfmon stats from servers. We use a deployment server to push this out. Here's a sample:

[perfmon://CPU]
counters = % Processor Time
disabled = 0
instances = *
interval = 10
mode = single
object = Processor
useEnglishOnly=true
index=2_###_test

The Splunk Universal Forwarder now restarts as expected on deployment (missed that the first time). There are no apparent errors in splunkd.log. Nothing turns up! Metrics confirm nothing is being sent to that index from the UF. I'm guessing that our security lockdown is preventing collection, but with no error messages anywhere it's hard to diagnose! Perfmon works on the target server, so we know the data is there and working. Splunk is 9.2.1, and it's running in "least privilege" mode on the UF (the new default). Any hints and pointers most welcome!
Hello @Michael.Mom , Thanks for posting to the AppDynamics Community. Your Question: is it correct to assume that the best approach is to provision an EC2 instance (or AWS workspace) in my AWS environment with the appropriate VPC / RDS security group settings and install an agent?
Brief Answer: not necessarily. We can avoid provisioning a new EC2 instance solely for this purpose.
Analysis & Observations:
The AppDynamics Database Agent can be run from any machine as long as it has network access to your RDS and the Controller.
This means you can leverage any existing on-premises server or, for scalability, any pre-existing AWS EC2 instance, including your controller if applicable.
But it’s important to ensure that your chosen machine has the necessary network access:
1. Machine to RDS: Ensure the machine can communicate with the RDS instance over the necessary ports (default: 1433 for SQL Server).
2. Machine to Controller: If the machine and the controller are on different hosts, ensure the machine can access the controller over the internet.
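For example, a quick reachability check from the chosen machine could look like this (the endpoint name is a placeholder):

nc -vz my-rds-instance.abc123.us-east-1.rds.amazonaws.com 1433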
Step-by-step guide:
1. Ensure resource requirements: make sure your machine has sufficient resources. See: Database Visibility System Requirements
2. Ensure your SQL Server version is supported. See: Database Visibility Supported Environments
3. Install the Database Agent. See: Install the Database Agent. Note: you can configure access to the controller using the controller-info.xml file (a sketch follows these steps); this is covered in the “Configure the Agent” section of the documentation above.
4. Configure database collectors. After installing the agent, you need to configure database collectors. Refer to: Add Database Collectors and Configure Microsoft SQL Server Collectors
5. Database user permissions: ensure that the database user has the appropriate permissions. For SQL Server on AWS RDS, refer to: Microsoft SQL Server on AWS RDS Permissions
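As mentioned in step 3, the agent reads its controller connection from controller-info.xml. A minimal sketch (all values are placeholders; the exact template ships with the agent):

<controller-info>
    <controller-host>mycontroller.saas.appdynamics.com</controller-host>
    <controller-port>443</controller-port>
    <controller-ssl-enabled>true</controller-ssl-enabled>
    <account-name>myaccount</account-name>
    <account-access-key>my-access-key</account-access-key>
</controller-info>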
Hope this helps.
Best Regards,
Martina
Hey, I would love to get help. I want to build a query to serve as a rule that monitors DNS requests. I work with two indexes. From one of them (INDEX1) I need the following fields: src, query, direction. Then, based on the results I got from INDEX1, I want the second index (INDEX2) to take the query field and check which category it falls under. In the second index (INDEX2), the query field is equivalent to the DOMAIN field, and the category field does not exist in INDEX1.
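A sketch of one possible approach, using the field and index names from the post and assuming events from the two indexes can be correlated on the domain value:

(index=INDEX1) OR (index=INDEX2)
| eval domain=coalesce(query, DOMAIN)
| stats values(src) as src, values(direction) as direction, values(category) as category by domain
| where isnotnull(src) AND isnotnull(category)

coalesce() normalizes the two field names onto one key, stats merges events from both indexes by that key, and the where clause keeps only domains seen in INDEX1 that were categorized in INDEX2.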
hi @ITWhisperer , thanks for your answer. I have the following query:

(sourcetype="mysource1" OR sourcetype="mysource2") AND (Node__name="myserver_name" OR (object__Name="myserver_name*") OR (location__Name="*myserver_name*"))

What I am trying to achieve is to assign the value "myserver_name" to a variable (e.g., servername) in order to avoid repetition. This way, if I need to modify the query, I only have to update the declared variable. I am looking for something like this:

| eval servername = "myserver_name"
(sourcetype="mysource1" OR sourcetype="mysource2") AND (Node__name=servername OR (object__Name=servername) OR (location__Name=servername))

This would allow me to use the variable servername instead of repeating the value "myserver_name" multiple times in the query. I hope that it's clear now!
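A search macro is the usual way to factor out a literal like this, since eval cannot feed a value into the base search. A sketch, with the macro name illustrative:

macros.conf
[servername]
definition = "myserver_name"

and then in the search:

(sourcetype="mysource1" OR sourcetype="mysource2") AND (Node__name=`servername` OR object__Name=`servername` OR location__Name=`servername`)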
If I have a histogram metric, for example request_duration_seconds_bucket, request_duration_seconds_count and request_duration_sum -- how do I plot these onto a heatmap in the Splunk Analytics workspace?