All Topics

Hello, I am new to Splunk. Where should I start learning, on a zero-to-hero basis? I need to get to an advanced level. I have gone through the documentation, but it is not systematic. Please suggest some good documentation or systematic videos.
Hello Splunkers!! I have a raw event, but the fields server IP and server name are not present in it, and I need to add both of these fields in Splunk at index time. Both fields have static values. What attributes should I use in props.conf and transforms.conf so that I can get both of these fields? Servername="mobiwick" ServerIP="10.30.xx.56.78"

Sample raw data:

<?xml version="1.0" encoding="utf-8"?><StaLogMessage original_root="ToLogMessage"><MessageId>6cad0986-d4b2-45e2-b5b1-e6a1af3c6d40</MessageId><MessageTimeStamp>2024-11-24T07:00:00.1115119Z</MessageTimeStamp><SenderFmInstanceName>TOP/Top</SenderFmInstanceName><ReceiverFmInstanceName>BPI/Bpi</ReceiverFmInstanceName><StatisticalElement><StatisticalSubject><MainSubjectId>NICKER</MainSubjectId><SubjectId>Prodtion</SubjectId><SubjectType>PLAN</SubjectType></StatisticalSubject><StatisticalItem><StatisticalId>8</StatisticalId><Period><TimePeriodEnd>2024-11-24T07:00:00Z</TimePeriodEnd><TimePeriodStart>2024-11-24T06:00:00Z</TimePeriodStart></Period><Value>0</Value></StatisticalItem></StatisticalElement></SogMessage>
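A minimal sketch of the usual approach, assuming a hypothetical sourcetype name sta_log (swap in the real one): a transform that matches every event can write a static indexed field to metadata, and fields.conf then marks the fields as indexed. These files must live on the instance that parses the data (indexer or heavy forwarder).

# props.conf
[sta_log]
TRANSFORMS-add_static = add_servername, add_serverip

# transforms.conf
[add_servername]
REGEX = .
FORMAT = servername::mobiwick
WRITE_META = true

[add_serverip]
REGEX = .
FORMAT = serverip::10.30.xx.56.78
WRITE_META = true

# fields.conf
[servername]
INDEXED = true

[serverip]
INDEXED = true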
Hello, I need help regarding an add-on which I built. This add-on was built using the Splunk Add-on Builder; it passed all the tests and can be installed on Splunk Enterprise and also on a single instance of Splunk Cloud. However, when it is installed on a cluster it does not work properly. When installed, the add-on is supposed to create some CSV files and store them in the add-on. However, when it is installed in a clustered Splunk environment, it does not create the CSV files and does not download the files it was supposed to download. Any help or advice is welcome, please. This is the add-on below. https://classic.splunkbase.splunk.com/app/7002/#/overview
Hello, I have the following query to search Proofpoint logs.

index=ppoint_prod host=*host1*
| eval time=strftime(_time, "%m-%d-%y %T")
| rex "env_from\s+value=(?<sender>\S+)"
| rex "env_rcpt\s+r=\d+\s+value=(?<receiver>\S+)"
| stats first(time) as Date first(ip) as ConnectingIP first(reverse) as ReverseLookup last(action) last(msgs) as MessagesSent count(receiver) as NumberOfMessageRecipients first(size) as MessageSize1 first(attachments) as NumberOfAttachments values(sender) as Sender values(receiver) as Recipients first(subject) as Subject by s
| where Sender!=""

It provides what I need on a per-message level. How would I modify this to get a list of ConnectingIP and ReverseLookup values per Sender? If possible it would be nice to also get the number of messages per sender, but that is not absolutely necessary. I understand I will need to drop from the query everything that is message-specific, like Subject, NumberOfAttachments, etc. I am looking to get something like this:

sender1@domain.com   ConnectingIP_1   ReverseLookup_1
                     ConnectingIP_2   ReverseLookup_2
sender2@domain.com   ConnectingIP_3   ReverseLookup_3
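One possible sketch, assuming ip, reverse, and the session field s behave as in the original search: extract the sender first, then group by it instead of by s.

index=ppoint_prod host=*host1*
| rex "env_from\s+value=(?<sender>\S+)"
| where isnotnull(sender) AND sender!=""
| stats values(ip) as ConnectingIP values(reverse) as ReverseLookup dc(s) as NumberOfMessages by sender

values() lists each distinct IP and reverse lookup once per sender, and dc(s) approximates the per-sender message count by counting distinct sessions.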
Hi Folks, I've been using mcollect to collect metrics from the events in my indexes, and I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes. That doesn't seem to be working; the metrics are only collected when I run the search manually. Any suggestions on how I can make mcollect automatically collect the metrics I am looking for? Thanks
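For reference, a scheduled report whose search ends in mcollect should collect on its cron schedule; a minimal savedsearches.conf sketch with hypothetical index and metric names:

[collect_event_metrics]
search = index=my_events | stats count by host | eval metric_name="events.count", _value=count | fields host metric_name _value | mcollect index=my_metrics
cron_schedule = */5 * * * *
enableSched = 1

Things worth checking: the search must actually be scheduled (enableSched = 1), and the owning role may need the run_mcollect capability plus write access to the target metrics index.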
Probably an easy one. I have two events as follows:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

Meaning in some events field2 exists and in some it doesn't. When it does, I want the value; when it doesn't, I want it to be blank. All records have mynextfield3 and I always want that as field3. I want to rex these lines and end up with:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
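A sketch that should cover both shapes, assuming each event is exactly two or three space-separated tokens:

| rex "^(?<field1>\S+)\s+(?:(?<field2>\S+)\s+)?(?<field3>\S+)$"
| fillnull value="" field2

The middle group is optional, so field2 stays empty on two-token events while field3 always captures the last token.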
Hello,

I want to create an HEC input on the indexers (indexer cluster).

I created inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:

[http]
disabled=0
enableSSL=0

[http://hec-input]
disabled=0
enableSSL=0
#useACK=true
index=HEC
source=HEC_Source
sourcetype=_json
token=2f5c143f-b777-4777-b2cc-ea45a4288677

and pushed this configuration to the peers (indexers). But when we go to Data inputs => HTTP Event Collector on the indexer side, we still find it as below:
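As a side note, the Data Inputs page only reflects configuration in locations the UI manages; to confirm the pushed stanza is actually active on a peer, btool on the indexer shows the merged config and the file it came from (it should resolve under etc/slave-apps/_cluster, or etc/peer-apps on newer versions, after the bundle push):

/opt/splunk/bin/splunk btool inputs list http://hec-input --debug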
I have read other forums and posts, but they did not help with my issue. I am getting the typical error while creating deployable apps: Error copying src="/opt/splunk/etc/shcluster/my_app" to staging area="/opt/splunk/var/run/splunk/deply.#######.tmp/apps/my_app", 180 errors.

They are all Python files (.py); this happens for every app that has Python files. I have checked the permissions and the splunk user owns the files. I am so confused as to what is happening. I understand this is a permission issue, but splunk owns the files (rambling), with permissions set to rw on all Python files on the deployer. Also, the splunk user owns all files under /opt/splunk.

Any help greatly appreciated!
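A few checks that may help narrow it down, assuming the default install path: look for files the splunk user does not own or cannot read, and confirm the staging area itself is writable.

find /opt/splunk/etc/shcluster -not -user splunk -ls
sudo -u splunk find /opt/splunk/etc/shcluster ! -readable -ls
ls -ld /opt/splunk/var/run/splunk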
Hello Team, I have forwarded syslogs to Splunk Enterprise. I am trying to create props.conf and transforms.conf in such a way that Splunk ingests all the messages matching the keywords I have defined in a regex in transforms.conf and drops all the non-matching messages; however, I am not able to do so. Is there a way to do that, or do transforms.conf and props.conf only work to drop the messages matching the regex? Currently, when I try this, Splunk drops only the events with the keywords I defined and ingests everything else. I am new to Splunk, so requesting some inputs on this. Thanks in advance!!
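One documented pattern is to send everything to the null queue by default and then route events that match the keywords back to the index queue; transforms run in order, so the keep rule must come after the drop rule. A sketch with hypothetical sourcetype and keywords:

# props.conf
[my_syslog_sourcetype]
TRANSFORMS-filter = drop_all, keep_keywords

# transforms.conf
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_keywords]
REGEX = (?i)(keyword1|keyword2|keyword3)
DEST_KEY = queue
FORMAT = indexQueue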
I'm trying to regex the field that has "REPLY" CommonEndpointLoggingAspect {requestId=94f2a697-3c0d-4835-b96a-42be3d2426e2, serviceName=getCart} - REPLY 
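A sketch that should match the sample line, pulling the ids and the trailing keyword:

| rex "requestId=(?<requestId>[^,]+),\s*serviceName=(?<serviceName>[^}]+)\}\s*-\s*(?<message_type>\w+)"
| search message_type=REPLY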
How to Redirect Smart Agent Temporary Files to Avoid /tmp Space Limitations on Linux

Installing any APM or Machine Agent with Smart Agent on your Linux box uses the /tmp directory to copy agent binaries before moving them to the intended directory.

The problem

You can get errors like the following:

error message = Error extracting Machine Agent in staging: reading file in zip archive: /tmp/.staging/machine-agent/jre/lib/modules: writing file: write /tmp/.staging/machine-agent/jre/lib/modules: no space left on device

Error creating Machine Agent service: error installing service: error moving service file to destination: rename /tmp/.staging/appdynamics-machine-agent.service /etc/systemd/system/appdynamics-machine-agent.service: invalid cross-device link

These errors are caused by insufficient space in the /tmp folder, or by /tmp being mounted on a separate device.

How to fix it

You need to have Smart Agent use a directory on your host other than /tmp. To do this, go to <Smart-Agent-Home-Directory>; in my case the directory is /opt/appdynamics/appdsmartagent. Commands in order:

cd /opt/appdynamics/appdsmartagent
./smartagentctl stop
export TMPDIR=/opt/appdynamics
./smartagentctl start

Now in your logs you will see:

{"severityText":"INFO","timestamp":"2024-10-04T16:30:29.692Z","name":"native","caller":"machine/task_helper.go:48","body":"downloaded file to ","downloaded file":"/opt/appdynamics/.staging/download/machineagent-bundle-64bit-linux-24.9.0.4408.zip"}
{"severityText":"INFO","timestamp":"2024-10-04T16:30:29.692Z","name":"native","caller":"machine/task_helper.go:161","body":"Extracting zip","package.name":"8a5e85401b3a01ac5dadd6394c235dbf032ffa04;MACHINE_AGENT","src path":"/opt/appdynamics/.staging/download/machineagent-bundle-64bit-linux-24.9.0.4408.zip","dest path":"/opt/appdynamics/.staging/machine-agent"}

This means Smart Agent is now copying everything into the /opt/appdynamics directory.
curl command:

curl -k -u admin:Password -X POST http://127.0.0.1:8000/en-US/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d

I am able to log in via the UI and create an access token, but if I try to do the same using the curl command, I get the response below. Note: the response has been trimmed.

<div class="error-message"> <h1 data-role="error-title">Oops.</h1> <p data-role="error-message">Page not found! Click <a href="/" data-role="return-to-splunk-home">here</a> to return to Splunk homepage.</p> </div> </div> </div> <div class="message-wrapper"> <div class="message-container fixed-width" data-role="more-results"><a href="/en-US/app/search/search?q=index%3D_internal%20host%3D%22f6xffpvw93.corp.com%2A%22%20source%3D%2Aweb_service.log%20log_level%3DERROR%20requestid%3D6740cfffb611125b5e0" target="_blank">View more information about your request (request ID = 6740cfffb611125b5e0) in Search</a></div> <div class="message-container fixed-width" data-role="crashes"></div> <div class="message-container fixed-width" data-role="refferer"></div> <div class="message-container fixed-width" data-role="debug"></div> <div class="message-container fixed-width" data-role="byline"> <p class="byline">.</p> </div> </div> </body>
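For comparison, the REST API normally listens on the management port (default 8089, over HTTPS) rather than the web port, and its paths carry no /en-US locale prefix; a sketch assuming default ports:

curl -k -u admin:Password -X POST https://127.0.0.1:8089/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d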
Need help extracting a field that comes after a certain word in an event. I am looking to extract a field called "sn_grp" with the value "M2 Infra Ops". So for every event that has sn_grp: I would like to extract the string that follows, i.e. "M2 Infra Ops". This string value will be the same for every event. Below is an example data set I am using to write the regex against:

\"sn_grp:M2 Infra Ops\"},{\"context\":\"CONTEXTLESS\",\"key\":\"Correspondence Routing Engine\
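A sketch against that escaped-JSON sample, where the capture stops at the backslash preceding the closing quote:

| rex "sn_grp:(?<sn_grp>[\w ]+)"

[\w ]+ allows letters, digits, underscores, and spaces, which covers M2 Infra Ops; widen the class if group names can contain other characters.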
I need to run a script to check whether a list of Linux servers has Splunk installed, and the process name. Any idea what the process name is, or the install directory? And whether it is forwarding to the Splunk console?
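The daemon process is splunkd, and the default install paths are /opt/splunkforwarder (universal forwarder) or /opt/splunk (full Splunk Enterprise). A rough sketch, assuming SSH access and a hypothetical servers.txt list of hostnames:

#!/bin/bash
# check each host for a running splunkd and a configured forwarding target
while read -r host; do
  echo "== $host =="
  ssh "$host" '
    pgrep -x splunkd >/dev/null && echo "splunkd running" || echo "splunkd not running"
    for dir in /opt/splunkforwarder /opt/splunk; do
      [ -d "$dir" ] && echo "installed in $dir" && grep -hs "^server" "$dir/etc/system/local/outputs.conf"
    done
  '
done < servers.txt

The grep shows the indexer(s) configured in outputs.conf; note that outputs may also live in an app under etc/apps, so adjust the path to your deployment.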
Hello to everyone! I want to build a dashboard with which I can access information from the config files of an indexer cluster. I know that the typical way to access config files is via the REST endpoints "/services/configs/conf-*", but as I understand it, these endpoints show only configuration files stored under /system/local/*.conf. Is there a way to access config files stored under /manager-apps/local?
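For what it's worth, the conf-* endpoints return the merged view of all config layers on whatever instance answers the call, so one approach is to query the peers themselves, where manager-apps content lands under etc/slave-apps (etc/peer-apps on newer versions) after a bundle push. A sketch with a hypothetical peer-name pattern:

| rest /services/configs/conf-indexes splunk_server=idx*
| table splunk_server title homePath maxTotalDataSizeMB

The properties endpoint can likewise be scoped per file and stanza: /services/properties/indexes/<stanza>.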
Hello everyone! I need a hint: I tried to set up log forwarding from macOS (ARM) to Splunk, but the logs never arrived. I followed the instructions from this video, and also installed and configured the Add-on for Unix and Linux. And what index will the logs appear in? Thanks!

Inside /Applications/SplunkForwarder/etc/system/local I have inputs.conf, outputs.conf, and server.conf.

inputs.conf

[monitor:///var/log/system.log]
disabled = 0

outputs.conf

[tcpout:default-autolb-group]
server = ip:9997
compressed = true

[tcpout-server://ip:9997]

server.conf

[general]
serverName =
pass4SymmKey =

[sslConfig]
sslPassword =

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
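Two quick checks that might help, assuming defaults: with no index= on the monitor stanza, events go to the default index (main), unless the Add-on for Unix and Linux inputs are configured to send to a dedicated index (commonly os); and the forwarder's connection status can be verified locally:

/Applications/SplunkForwarder/bin/splunk list forward-server

On the indexer side, searching index=_internal for the forwarder's host name shows whether its internal logs are arriving at all.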
Hello,

We have a multisite indexer cluster with Splunk Enterprise 9.1.2 running in Red Hat 7 VMs, and we need to migrate them to other VMs running Red Hat 9. The documentation requires that all members of a cluster have the same OS and version. I was thinking of simply adding one new indexer (Red Hat 9 VM) at a time and detaching an old one while enforcing the bucket counts, so for a short time the cluster would have members with different OS versions. Upgrading from Red Hat 7 to Red Hat 9 directly in the Splunk environment is not possible. I would like to know if there are critical issues to face while the migration is happening? I hope the procedure won't last more than 2 days.
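For reference, the peer-removal step that waits for replication and search-factor targets to be met is taking the old peer offline with bucket-count enforcement:

/opt/splunk/bin/splunk offline --enforce-counts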
Hi All, how can I find the difference between 2 tables?

index=abc task="task1" | dedup component1 | table component1 | append [index=abc task="task2" | dedup component2 | table component2] | table component1 component2

These are the 2 tables. I want to show the extra data that is in component2 and not in component1. How can I do it?
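One way to sketch this without append: combine both tasks, normalize the field name, and keep components seen only under task2. This assumes component1/component2 are the per-task field names from the original search.

index=abc (task="task1" OR task="task2")
| eval component=if(task="task1", component1, component2)
| stats values(task) as tasks by component
| where mvcount(tasks)=1 AND tasks="task2"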
Hi all,

I have a problem when trying to send data (metrics) from an Apache HTTP server to Splunk Observability Cloud. My OS is CentOS 7, and I already get CPU/memory metrics. My Apache version is Apache/2.4.6, and server stats are working (http://localhost:8080/server-status?auto). I have referenced the following documents and updated the config file /etc/otel/collector/agent_config.yaml, but I do not get any metrics about Apache:

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/apache-receiver.html
https://docs.splunk.com/observability/en/gdi/monitors-hosts/apache-httpserver.html

Could anybody kindly do me a favor and help fix it? Thanks in advance. #observability
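A common miss is declaring the receiver without adding it to a metrics pipeline; a sketch of the two relevant sections of agent_config.yaml, assuming the default Splunk exporter name:

receivers:
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
    collection_interval: 10s

service:
  pipelines:
    metrics/apache:
      receivers: [apache]
      exporters: [signalfx]

After editing, restart the collector (systemctl restart splunk-otel-collector) and check its logs for scrape errors.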
Hello Splunkers!! During the testing phase with demo data, the timestamps match accurately. However, with real-time data ingestion there is a mismatch in the timestamps, which suggests a discrepancy in the timestamp parsing or configuration when handling live data. Could you please suggest potential reasons and causes? Additionally, it would be helpful to review the relevant props.conf configuration to ensure consistency.

Sample data:

{"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log","code":"10010","kind":"event","original":"Communication session on line {1:d}, lost.","context":{"parameter1":"12","parameter2":"2","parameter3":"6","parameter4":"0","physical_line":"12","connected_unit_type_code":"2","connect_logical_unit_number":"6","description":"A User Event message will be generated each time a communication link is lost. This message can be used to detect that an external unit no longer is connected.\nPossible Unit Type codes:\n2 Debug line\n3 ACI line\n4 CWay line","severity":"Info","vehicle_index":"0","unit_type":"NT8000","location":"0","physical_module_id":"0","event_type":"UserEvent","software_module_id":"26"}},"service":{"address":"localhost:50005","name":"Eventlog"},"agent":{"name":"ACI.SystemManager","type":"ACI SystemManager Collector","version":"3.3.0.0"},"project":{"id":"fleet_move_af_sim"},"ecs.version":"8.1.0"}

Current props:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
#KV_MODE = json
pulldown_type = 1
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z

Current results: [screenshot: mismatched timestamps]

Note: I am using an HTTP Event Collector token to get the data into Splunk. Inputs and props settings are configured under the Search app.
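One thing worth checking: TIME_PREFIX/TIME_FORMAT parsing only applies when HEC data goes through the parsing pipeline, i.e. the raw endpoint; the event endpoint (/services/collector/event) takes the time field from the event envelope (or uses arrival time) and skips props-based timestamp extraction. A sketch of sending a payload through the raw endpoint instead, with a hypothetical host and token:

curl -k "https://splunk-host:8088/services/collector/raw?sourcetype=my_json" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log"}}'

Also, the props stanza must live where parsing happens (the indexers or heavy forwarder receiving the HEC traffic), not only in the Search app on a search head.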