All Posts


The scenario: there are 100 endpoints sending logs to their internal, in-house syslog server. We need to deploy Splunk here so that the admin will be able to monitor logs in Splunk Enterprise. Both the Universal Forwarder and Splunk Enterprise should be present on the same syslog server. I am here for the steps I need to follow for this deployment. I am listing below the steps I am thinking of taking.
1.) First install Splunk Enterprise on the server, and then install the Universal Forwarder.
2.) During the installation of the Universal Forwarder, choose local system rather than domain deployment, leave the deployment server field blank, and for the receiving indexer enter the syslog server's IP address and port number, which I can get by running ipconfig in cmd.
3.) Download the Microsoft add-on from Splunkbase on the same server.
4.) Extract the Splunkbase file, create a local folder under SplunkForwarder > etc, paste the inputs.conf file there, and make the required changes.
5.) Then I will be able to get all of the syslog server's logs into Splunk Enterprise.
Please correct me, or add other steps that I need to follow.
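As a rough sketch of what steps 2 and 4 might end up looking like on the forwarder side (the app folder, monitor path, sourcetype, and index below are placeholders, and this assumes the local Splunk Enterprise instance has receiving enabled on the default port 9997):

# inputs.conf under SplunkUniversalForwarder\etc\apps\<your_app>\local\ - path and index are placeholders
[monitor://C:\Syslog\*.log]
index = syslog
sourcetype = syslog
disabled = 0

# outputs.conf pointing the UF at the Splunk Enterprise instance on the same server
[tcpout]
defaultGroup = local_indexer

[tcpout:local_indexer]
server = 127.0.0.1:9997

Since both instances are on the same host, the loopback address works for the receiver; note that the UF and Splunk Enterprise cannot both use the default management port 8089 on the same machine.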
I can't see any obvious issue with your code. What happens if you include debug log statements that output the report_id value, and then the resulting URL? Assuming logging mode is set to debug:

helper.log_debug(f"Report ID is: {report_id}")
url = f"https://example_url/{report_id}/download"
helper.log_debug(f"URL is: {url}")
headers = {
    "accept": "application/json",
    "Authorization": f"Bearer {jwt_token}",
}
Hi @sainag_splunk, could you also please explain what you mean by index peers and the index cluster bundle? Please reply.
A streaming language generally does not use command branching.  However, SPL has plenty of instruments to obtain the result you want.  So, let me rephrase your requirement: what I want to extract from events is a vector of three components, field1, field2, field3.  The method of extraction is based on whether the event contains Dog or Cat. To illustrate, given this dataset

_raw
a b c |i|j|k| Dog woofs
l m n |x|y|z| Cat meows
e f g |o|p|q| What does fox say?

I want the following results

_raw                                field1   field2   field3
a b c |i|j|k| Dog woofs             a        b        c
l m n |x|y|z| Cat meows             x        y        z
e f g |o|p|q| What does fox say?

(This is based on reverse engineering your regex.  As I do not know your real data, I have to make the format more rigid to make the illustration simpler.) Let me demonstrate a conventional method to achieve this in SPL.

| rex "(?<field1_dog>\S+)\s(?<field2_dog>\S+)\s(?<field3_dog>\S+)\s"
| rex "\|(?<field1_cat>[^\|]+)\|(?<field2_cat>[^|]+)\|(?<field3_cat>[^|]+)\|"
| foreach field1 field2 field3
    [eval <<FIELD>> = case(searchmatch("Dog"), <<FIELD>>_dog, searchmatch("Cat"), <<FIELD>>_cat)]
| fields - *_dog *_cat

As you can see, the idea is to apply both regexes, then use the case function to selectively populate the final vector. This idea can be implemented in many ways. Here is the emulation that generates my mock data.  Play with it and compare with real data.

| makeresults format=csv data="_raw
a b c |i|j|k| Dog woofs
l m n |x|y|z| Cat meows
e f g |o|p|q| What does fox say?"

In many traditional languages, the requirement can also be expressed as conditional evaluation. While this is less conventional, you can also do this in SPL, usually with more cumbersome code.
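As a rough sketch of that conditional-evaluation style (assuming the same rigid mock format as above), each field can be populated with its own case() of replace() extractions:

| eval field1 = case(searchmatch("Dog"), replace(_raw, "^(\S+)\s.*", "\1"),
                     searchmatch("Cat"), replace(_raw, "^[^|]*\|([^|]+)\|.*", "\1"))
| eval field2 = case(searchmatch("Dog"), replace(_raw, "^\S+\s(\S+)\s.*", "\1"),
                     searchmatch("Cat"), replace(_raw, "^[^|]*\|[^|]+\|([^|]+)\|.*", "\1"))
| eval field3 = case(searchmatch("Dog"), replace(_raw, "^\S+\s\S+\s(\S+)\s.*", "\1"),
                     searchmatch("Cat"), replace(_raw, "^[^|]*\|[^|]+\|[^|]+\|([^|]+)\|.*", "\1"))

Each field needs its own pair of extractions, which is exactly why the rex-both-then-case approach above tends to be cleaner.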
Can I know what changes were made to the values file? I was able to get the OTel chart from the GitHub project.
I'm a little bit lost on your architecture, to be honest. But if I understand your later comments correctly, you want to restrict the teams responsible for sending the data from sending to unauthorized indexes, right? It can be tricky depending on the overall ingestion process. "Normal" S2S has no restrictions on the sent data, so as long as you accept data from a forwarder, you're accepting it into whatever index it's meant for. Newer Splunk versions, however, let you limit S2S-over-HTTP connections to only the index(es) authorized for a particular HEC token. If we're talking syslog here, then you should handle it at the syslog daemon level.
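A minimal sketch of that token-level restriction, assuming a hypothetical token stanza in inputs.conf on the receiving side (the token name, value, and index names are placeholders):

# HEC token limited to the indexes a given team is allowed to write to
[http://team_a_token]
token = <generated-token-guid>
indexes = team_a_main, team_a_metrics
index = team_a_main
disabled = 0

With the indexes setting in place, events sent with this token that target any other index are rejected.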
What I want to say is:

if _raw contains the word "Dog" then rex "(?<field1>([^\s]+))\s(?<field2>([^\s]+))\s(?<field3>([^\s]+))\s"
if _raw contains the word "Cat" then rex "(?<field1>([^\|]+))\|(?<field2>([^\|]+))\|(?<field3>([^\|]+))\|"

because if the line contains Dog, fields are delimited by spaces, but if it contains Cat, fields are delimited by the pipe symbol. I want the same field names; I just need to use a different rex based on the delimiter. I can't formulate one rex that contains both delimiters.
Yeah, I am getting a syntax error, Invalid Argument, on rex.
"The segregated data should be written to separate files to be monitored by separate inputs.conf stanzas" ----> where do I need to put this inputs.conf? On the deployment server? Because on the UF only deploymentclient.conf will be given, right? And how do I finally get data for a particular FQDN from the syslog server to the indexer? Which conf should be used, and where do I declare the index for that? Please be specific.
inputs.conf is part of the Splunk Universal Forwarder configuration and is sent out by the Splunk Deployment Server. I don't understand the second question.  The UF does not write to log paths, except for its own (internal) logs.
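For what it's worth, a minimal sketch of how a Deployment Server can target that UF with an app containing the inputs.conf, assuming hypothetical server class, host, and app names in serverclass.conf:

# serverclass.conf on the Deployment Server - class, whitelist, and app names are placeholders
[serverClass:syslog_uf]
whitelist.0 = syslog-server.example.com

[serverClass:syslog_uf:app:syslog_inputs]
stateOnClient = enabled
restartSplunkd = true

The inputs.conf itself then lives inside the syslog_inputs app (for example under its local directory), and the UF's deploymentclient.conf only tells the UF which Deployment Server to poll.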
Hello Splunkers,
I'm working on integrating a Microsoft Office 365 tenant hosted in China (managed by 21Vianet) with Splunk Cloud. I am using the Splunk Add-on for Microsoft Office 365 but need help configuring it specifically for the China tenant. I understand that the endpoints for China are different from the global Microsoft 365 environment. For instance:
- Graph API Endpoint: https://microsoftgraph.chinacloudapi.cn
- AAD Authorization Endpoint: https://login.partner.microsoftonline.cn
Could someone provide step-by-step instructions or point me to the necessary configuration files (like inputs.conf) or documentation to correctly set this up for:
- Subscription to O365 audit logs
- Graph API integration
- Event collection
Additionally, if there are any known challenges or limitations specific to the China tenant setup, I'd appreciate insights on those as well.
Thank you in advance for your guidance!
Tilakram
edit_tcp_stream
edit_upload_and_index
input_file
search
were the needed capabilities to show the "Add Data" button under Settings in Splunk Cloud, specifically. I think the only ones you really do need though are "edit_upload_and_index" and "search" - specifying the indexes available at the role level did limit the shown indexes in the drop-down when going through the add data workflow.
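For reference, a sketch of what an equivalent role might look like if expressed in authorize.conf (role and index names here are hypothetical, and in Splunk Cloud you would normally do this through the Roles UI):

# Role granting just enough to use the Add Data workflow; the index list limits the drop-down
[role_data_uploader]
edit_upload_and_index = enabled
search = enabled
srchIndexesAllowed = sandbox;app_uploads
srchIndexesDefault = sandbox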
Where do I need to put this inputs.conf? Are you talking about the log paths that we write on the UF?
I'm sure the syslog server has the ability to segregate traffic by a number of factors - including IP address and perhaps FQDN.  The segregated data should be written to separate files to be monitored by separate inputs.conf stanzas.  Each monitored file can have a different destination index.
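A minimal sketch of what those stanzas might look like on the UF, assuming hypothetical file paths, sourcetype, and index names:

# One monitor stanza per segregated location - paths and indexes are placeholders
[monitor:///var/log/syslog/firewall-a.example.com/*.log]
index = net_firewall
sourcetype = syslog

[monitor:///var/log/syslog/router-b.example.com/*.log]
index = net_router
sourcetype = syslog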
Use a Rising input.  This is where DBX keeps track of the last value seen for a specific field so subsequent queries can fetch newer values. See https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Createandmanagedatabaseinputs#Choose_input_type for details.
I figured this out.  I had 2 datasets in my model, and when I was specifying datamodel=XXX, I didn't pass a dataset after it.  By default this assumes the first listed dataset, so it worked.  When I was trying to run the query associated with the other dataset, it wouldn't.  Simply adding the dataset name as an argument got it working.
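For anyone hitting the same thing, a sketch of the difference, assuming a tstats-style search and hypothetical model and dataset names:

| tstats count from datamodel=My_Model
| tstats count from datamodel=My_Model.Second_Dataset

The first form falls back to the first dataset listed in the model; the second names the dataset explicitly, which is the "adding the dataset name as an argument" fix described above.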
I can't tell very easily from the Aruba documentation, but I would hazard a guess from what I do see. The Aruba devices likely forward logs via syslog or HEC; in either case, sort out which it is and then follow your Splunk instance's current ingestion methods for that transport mechanism.
Sorry if this is troubling everyone... I am new to Splunk admin and still learning. We have network logs coming in; they will be collected by a dedicated syslog server (configured using FQDN) and forwarded to our indexers via a UF installed on that server.  Currently we have a deployment server which forwards all the logs to the indexer via the created index, and then on the cluster manager we write props.conf and transforms.conf in such a way that a specific FQDN goes to a specific index name which is already mentioned in the logs (we give them the index name). Where else can we write this rule, I mean props and transforms? Can we write it on the deployment server? Can we do this in any easier and faster way? If yes, please help me with the exact approach, anyone... it will be really helpful for me.
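For reference, a minimal sketch of the kind of host-based index routing described here (the host pattern, transform name, and index are placeholders):

# props.conf - attach the routing transform to events from matching hosts
[host::*.branch.example.com]
TRANSFORMS-route_index = route_branch_to_index

# transforms.conf - rewrite the destination index at parse time
[route_branch_to_index]
SOURCE_KEY = MetaData:Host
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = network_branch

Because these are index-time settings, they take effect where the data is parsed - the indexer tier (pushed from the cluster manager) or a heavy forwarder - not on the UF; the deployment server only distributes apps and does not parse data itself.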
Logs are now coming in as expected.  A couple of things threw me off:
- Besides adding the index to the dashboard portlet searches, I had to examine the XML to modify (add the index to) the base search at the top so the associated drop-downs and the results portlet at the bottom of the dashboard worked.
- Changing the data input's source type from 'Automatic' to 'From list' -> 'terraform_cloud' didn't take. It would revert back to 'Automatic', but in the end the source type is still correctly attached to the logs and fields are extracted.
- Lack of documentation. Wasn't sure of the index, source, host, source type, polling interval, log level, etc. Could maybe be added to the setup page? Appreciate just having the app though.
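To illustrate the base-search tweak, a minimal Simple XML sketch (the search id, index, and queries below are placeholders, not the app's actual dashboard source):

<dashboard>
  <!-- Base search shared by the drop-downs and result panels -->
  <search id="base_search">
    <query>index=terraform_cloud sourcetype=terraform_cloud</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <!-- Post-process search that reuses the base search -->
        <search base="base_search">
          <query>| stats count by source</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

Adding the index directly to the base query is what makes every panel and drop-down referencing base_search pick it up.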
Start in the DMC to do a CPU performance comparison across the various instances, or try this search:

index=_introspection host=<replace-with-hostname> sourcetype=splunk_resource_usage component=PerProcess "data.pct_cpu"="*"
| rename data.* as *
| eval processes=process_type.":".process.":".args
| timechart span=10s max(pct_cpu) as pct_cpu by processes

This assumes an HF - you didn't specify - but if it's a UF there is something similar, just a bit different.