hi @andrewtrobec You can run the query below to get license details:
| rest /services/licenser/licenses
| transpose
If the reply helps, a karma upvote would be appreciated.
hi @Maries It would be wise to take help from Splunk Professional Services and AWS for this activity, especially for managing the data, avoiding data loss and duplicated events, and, above all, preserving data integrity. Also consider the security, compliance, and regulatory requirements of the company.
hi @Luckyani You will be using the Splunk Add-on for AWS (Splunk Add-on for Amazon Web Services (AWS) | Splunkbase). In the add-on input you'll need to create and configure an IAM role that has permission to read the S3 resources hosted by the third party. Depending on the trust relationship between your on-prem and third-party resources, you'll have to allow traffic from their AWS infrastructure to yours (i.e. allow-list the traffic); this needs to be done on both sides. Ensure the IAM role specifies the trust relationship and is granted the correct permissions. I would strongly recommend avoiding overly permissive policies: restrict the resources and limit the actions. Once you configure the account and specify the role correctly in the add-on, you should be able to see the details of the resources accessible from the third-party AWS account. Configuring the account in the add-on: Manage accounts for the Splunk Add-on for AWS - Splunk Documentation. I would also strongly recommend using the SQS-based S3 input for S3 data sources; it scales well for high-frequency changes to datasets in S3. Configure SQS-based S3 inputs for the Splunk Add-on for AWS - Splunk Documentation. If the reply helps, a karma upvote would be appreciated.
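As an illustration of the trust relationship (the account ID and role name here are hypothetical, not from your environment), the third party's role would carry a trust policy along these lines so your add-on's role can assume it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/splunk-addon-role" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The role's attached permissions policy should then grant only the narrow S3 read actions (e.g. s3:GetObject, s3:ListBucket) on the specific buckets being shared.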
hi @Mockjin assuming your field & value is like this: inputfield="test1,test2" you can do something like this. Note that TERM() and PREFIX() require literal strings, so substitute the values directly rather than computing them with mvindex/split inside the where clause:
| tstats values(PREFIX(test_content=)) as test_content
  where index=testindex AND (TERM(host=test1) OR TERM(host=test2))
  by _time PREFIX(host=)
hi @MousumiChowdhur Depending on which services you want to collect data for, configure the inputs accordingly:
https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configuretenant
https://docs.splunk.com/Documentation/AddOns/released/MSO365/ConfigureinputsmanagementAPI
https://docs.splunk.com/Documentation/AddOns/released/MSO365/Configureinputs
You can follow these documents from Splunk about securing your infrastructure:
https://docs.splunk.com/Documentation/Splunk/9.0.4/Security/AboutsecuringyourSplunkconfigurationwithSSL
https://docs.splunk.com/Documentation/Splunk/9.0.4/Security/RenewExistingCerts
If the reply helps, a karma vote would be appreciated!
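As a minimal sketch (the certificate path and password are placeholders, and the exact options can vary by Splunk version), pointing splunkd at your own certificate typically involves a server.conf stanza like:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCert.pem
sslPassword = myCertPassword
```

Restart splunkd after the change, and verify the presented certificate with a client such as `openssl s_client -connect <host>:8089`.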
I was able to resolve these issues by clearing the browser cache and cookies while testing some input configurations for the Salesforce Streaming add-on.
You can use an OR condition in your search and use stats where the events match, instead of doing a join operation:
index=myindex ("Processing started" OR "Processing finished with result")
| stats count by id
| where count>1
Check the port configured for HEC; it should be port 8088. You are hitting the web console on port 8000. Some default ports:
8000 - Splunk Web (default for clients to the Splunk Search page)
8089 - Management/REST API & distributed search (default)
9997 - Indexing receiver (for forwarders to the Splunk indexer)
8181 - Search replication
8080 - Index replication
8191 - KV store replication
8088 - HTTP Event Collector
8065 - Splunk app server
514 - Legacy syslog input (UDP/TCP)
1433 - DB Connect (to fetch data from databases into Splunk)
There could be many reasons for events not reaching Splunk, from network configuration to permissions on the cluster side or even the port configured for HEC. Have you tried sending a simple curl message with the HEC token from the cluster instance to Splunk, to see if it's reaching?
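As an illustration (the hostname and token below are placeholders for your environment), a basic HEC connectivity test from the cluster instance looks like this:

```
curl -k https://your-splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello from the cluster"}'
```

A healthy endpoint replies with `{"text":"Success","code":0}`; a connection timeout or refusal points at the network path or port rather than at Splunk itself.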
I can see an exclamation mark on the job in the first screenshot. You'll need to inspect the job for the scheduled search and see whether any issues are identified, or whether indexers were unable to provide results as part of the search execution. Ensure that scheduled searches always complete their executions, and schedule them with the correct priority.
Hi @weidertc You'll have to write a <change> block and set the tokens that need an update to the desired value, or to their default value, as per your use case.
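A minimal Simple XML sketch (the input, token names, and values here are hypothetical, just to show the shape):

```xml
<input type="dropdown" token="region" searchWhenChanged="true">
  <label>Region</label>
  <choice value="*">All</choice>
  <change>
    <!-- reset dependent tokens whenever this input changes -->
    <set token="city">*</set>
    <unset token="detail_view"></unset>
  </change>
</input>
```

Use <set> to push a token back to a default value and <unset> to clear it entirely, depending on what your panels expect.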
Hi @Khanu89 For your pie chart, add the following option in the XML:
<option name="charting.chart.showPercent">1</option>
You should then see the percentage against each category in the chart. If it helps, a karma vote is appreciated.
Hi @Marco_Develops Try the following. Update your search this way:
your base search
| chart count over SEVCAT
And configure the drilldown on the dashboard panel. If it helps, karma points are appreciated!
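As an illustration of the panel drilldown (the token name is hypothetical; adapt it to your dashboard), Simple XML can capture the clicked SEVCAT value in a token for other panels or searches to consume:

```xml
<drilldown>
  <!-- store the clicked category so other panels can filter on it -->
  <set token="sevcat_tok">$click.value$</set>
</drilldown>
```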
Hi @uagraw01 In your search terms for the services, replace that with "ERROR". Update your extractions for the error code this way:
| rex "errorCode=(?P<EC1>\d+)"
| rex "Error Code :: (?P<EC2>\d+)"
| eval ErrorCode=coalesce(EC1,EC2)
| where tonumber(ErrorCode)>499
Then continue to extract the service and other fields as required and check the results. If the results include services that should not be there, exclude them with the NOT keyword as part of your search, like NOT("xyz-service" OR "abc-service").
Hi @amitru Try something like this, then run a stats command to get insights:
| rex field=_raw "URI\s\:\s(?P<URL>[\w\:\/\.]+)"
| rex field=_raw "loginId\"\:\s\"(?P<UserID>[\w]+)\""
| stats count by UserID URL
Hi @ilanaKarten0333 You'll have to first identify the key field/entity to get your event statistics. For example, your key field could be "trigger name" or "IsEntityInBlackList: Entity", as seen in the logs above. Once you identify the field you are interested in viewing events for, use the "stats" command to get insights. Example: | stats count by trigger_name
Hi @ychoo For a good long-term solution, I think you'll need to update the sourcetype for the log from text to json so that the logs are parsed correctly. If not, you can proceed with spath or regex for the field extractions: source="D:\\Learn Splunk\\test.txt" sourcetype="text"
| rex field=_raw "date\"\:\"(?P<SourceTimestamp>[\d\-\:\.T]+)\"\},\"s\"\:\"(?P<Severity>[\w]+)\",\s\"c\"\:\"(?P<Component>[\w]+)"
| rex field=_raw "ctx\"\:\"(?P<Context>[\w]+)\",\"msg\"\:\"(?P<Message>[\w\s]+)"
| table _time SourceTimestamp Severity Component Context Message
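Alternatively, if the JSON portion of each event is well formed, spath can extract the fields without hand-written regexes. A sketch, assuming the same s/c/ctx/msg keys as above:

```
source="D:\\Learn Splunk\\test.txt" sourcetype="text"
| spath input=_raw
| rename s AS Severity, c AS Component, ctx AS Context, msg AS Message
| table _time Severity Component Context Message
```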
Hi @Borntowin You can increase the Canvas dimensions, setting them to your needs. Note: you'll need to use the Absolute layout mode when creating the dashboard in Dashboard Studio. If it helps, karma points will be appreciated.
Hi @hieuba6868 , Write the first part of your search to fetch from the WinHostMon sourcetype, then, using a common denominator field, write a stats command to correlate the information from the lookup/WinEventLog. Or you can write a join query to fetch data from the two sourcetypes, whichever is feasible and less resource-taxing.
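A minimal sketch of the stats-based correlation (the index name and the choice of host as the common field are assumptions; adjust to your data):

```
index=windows (sourcetype=WinHostMon OR sourcetype=WinEventLog)
| stats values(Type) AS host_info values(EventCode) AS event_codes BY host
```

Grouping both sourcetypes by the shared field in one stats pass is usually much cheaper than a join, which is limited by subsearch result caps.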