All Posts

Hi, here https://docs.splunk.com/Documentation/Splunk/9.0.0/Installation/HowtoupgradeSplunk is the upgrade path from 7.3.3 to 9.0.x. Based on that, you should first upgrade to 8.1.x, then you can go to 9.0.x. r. Ismo
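The two-hop upgrade might look like this on a Linux instance (a sketch; the package names and /opt path are placeholders, and you should back up $SPLUNK_HOME/etc first):

```
# Hop 1: 7.3.3 -> 8.1.x
$SPLUNK_HOME/bin/splunk stop
tar -xzf splunk-8.1.x-linux-x86_64.tgz -C /opt   # overlay the new release on the install dir
$SPLUNK_HOME/bin/splunk start --accept-license   # migration runs on first start
$SPLUNK_HOME/bin/splunk version                  # confirm 8.1.x before the next hop

# Hop 2: 8.1.x -> 9.0.x, same sequence with the 9.0.x package
```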
How did you make the change?  If you used the GUI then it's possible the change never propagated (I've heard rumors about this happening).  Config changes on Splunk Cloud should be made in an app which you then upload and install.
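If it helps, a minimal app for this is just a directory with the setting in `local` (a sketch; `my_props_app` and the sourcetype stanza name are placeholders):

```
my_props_app/
├── default/
│   └── app.conf          # app metadata
└── local/
    └── props.conf        # e.g. a [my:sourcetype] stanza with your EXTRACT-* settings
```

Package it with `tar -czf my_props_app.tgz my_props_app` and install the resulting file through the app management page on your Splunk Cloud instance.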
Don't let the SMTP server go offline. There is no Splunk feature for that. Perhaps your mail admin can offer a solution, maybe using redundant servers.
Computer terms can be confusing since they often have several meanings. "source" is where the data comes from.  In Splunk metadata, the source is the name of the file from which the data originated.  "source" can also refer to the originating server or app. "endpoint" usually refers to a user workstation, but a specific REST command is also an endpoint.
Hi @gcusello , I will be accessing the PCs, since I will need to install the Universal Forwarder on each of them. I don't use WMI to access the PCs; rather, I access them physically.
Assuming that you configured your HEC port to be 8088, that URL looks correct to me. Also, that endpoint expects JSON data - is that what Blue Prism is sending? There are several REST endpoints you can post to for the Splunk HEC, documented here. There is also a page with some curl commands you can use to test the HEC endpoint locally and rule out where the issue is - that is, if it works with curl, is there a network issue in between (e.g. a firewall)? HTTP Event Collector examples - Splunk Documentation

https://mysplunkserver.example.com:8088/services/collector
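For reference, a minimal curl test against the event endpoint might look like this (a sketch; the host, token, and sourcetype are placeholders, and `-k` skips certificate checks for testing only):

```
curl -k https://mysplunkserver.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": {"message": "hello from curl"}, "sourcetype": "blueprism:test"}'
```

A successful call returns {"text":"Success","code":0}; an auth or endpoint problem returns a non-zero code that narrows down where the failure is.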
Hi @QuantumRgw , how do you think it's possible to monitor a system without accessing it? Do you have domain credentials to access the PCs using WMI? Your question is too vague. Ciao. Giuseppe
Hi again @gcusello , thank you for your information. My priority is to monitor the PCs. It would be great if you could let me know whether it is possible to check HTTPS without checking the server, getting the information directly from the PCs instead. I've tried to look through my laptop's data logs, but I don't think I can access them. It would be great if there were a step-by-step explanation of it. Thank you in advance.
Sorry about the late update. This gives me the earliest events' _time for all the selected indexes. I still have to filter out those that were created in my selected time range, which seems doable as below, but for some reason running this isn't giving me the answer I want - just like a join wouldn't work for index=* as opposed to an individual index. I can't explain what's happening.

| tstats min(_time) as earliest_event where earliest=-6mon latest=now
    [ search index=_internal source=/opt/splunk/var/log/splunk/cloud_monitoring_console.log* TERM(logResults:splunk-ingestion) earliest=-30d latest=now
      | rename data.* as *
      | fields idx
      | rename idx as index ]
    by index
| eval cutoff = relative_time(earliest_event,"-30d")
| where earliest_event>cutoff
I should be more specific. I do not see this option in Splunk Enterprise as of version 9.0.5. I will need to check Enterprise 9.0.7, which is the latest as of this reply. I really wish they would work more on Dashboard Studio because it is a very clean presentation.
Apparently you can only do this on Splunk Cloud. I don't see this option as of 9.0.5. https://www.splunk.com/en_us/blog/tips-and-tricks/dashboard-studio-show-or-hide-the-latest-features-in-splunk-cloud-platform-9-0-2303.html
Hello All, Do we have any method or workaround to export the results of a trellis-layout visualization in a dashboard to an exported PDF? Any suggestions or inputs will be very helpful. Thank you, Taruchit
Hi @QuantumRgw , your requirement is much larger than I thought: you have to monitor the servers, the PCs, and the network to identify potential threats. Once you have all these logs, you have to identify possible variations or threats. It isn't an immediate job; you should first analyze your perimeter (defining all the logs to use) and identify all the use cases to implement. Ciao. Giuseppe
Hi @AL3Z , as I said, with my search you have the list of all data flows (sourcetypes) for each endpoint (host). Ciao. Giuseppe
Hello @gcusello , I would like to monitor the log data from the PCs in the business firm, where the PCs are connected to the server. I am planning to install the Universal Forwarder on each PC and forward the data to a host PC in the firm. I want to monitor whether there are out-of-the-ordinary events, ranging from simple PC activity monitoring (HTTPS, security events) to brute-force attacks, etc. I don't know whether monitoring the server will give me the HTTPS logs of the PCs, or whether I need an installation on each one, forwarding its data. Thank you for the Security Essentials link (https://splunkbase.splunk.com/app/3435).
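For context, the forwarding side of a Universal Forwarder setup is typically two small config files (a sketch; the receiver IP, port, and event log choice are placeholders):

```
# outputs.conf - where the forwarder sends data (your receiving "host PC")
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 192.168.1.10:9997

# inputs.conf - what to collect, e.g. Windows Security events
[WinEventLog://Security]
disabled = 0
```

The receiving instance must also be configured to listen on the same port (Settings > Forwarding and receiving > Configure receiving).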
I have a custom sourcetype that has the following advanced setting:

EXTRACT-app = ^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app  (?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

I updated the regex to be slightly less restrictive about the whitespace following the "_app" portion:

EXTRACT-app = ^(?P<date>\w+\s+\d+\s+\d+:\d+:\d+)\s+(?P<host>[^ ]+) (?P<service>[a-zA-Z\-]+)_app\s+(?P<level>\w+)⏆(?P<controller>[^⏆]*)⏆(?P<thread>[^⏆]*)⏆((?P<flowId>[a-z0-9]*)⏆)?(?P<message>[^⏆]*)⏆(?P<exception>[^⏆]*)

(So instead of matching exactly two spaces following `_app`, we match one or more whitespace characters.) After saving this change, it appears Splunk Cloud still uses the previous regex: events that include only a single space after "_app" don't get their fields extracted. I thought perhaps I needed to wait a little while for the change to propagate, but I made the change yesterday and it still doesn't extract the fields today. Is there anything else I need to do for the regex change to take effect?
@gcusello @richgalloway  Please correct me if I'm mistaken, but the source is where the data begins, while the endpoint acts as the destination or host where the data is either stored or received.
Thank you @richgalloway . Is there a way to avoid losing alerts generated while the SMTP server is offline? Thank you, Andrea
Hi @marco_carolo , you should extract all the fields and then correlate them:

<your_search>
| rex "[^\[]*\[(?<extracted_pid>[^\]]*)\]\s*\[(?<extracted_job_name>[^\]]*)\]\s*\[(?<extracted_index>[^\]]*)\]\s*(?<msg>.*)"
| stats earliest(_time) AS earliest latest(_time) AS latest BY extracted_job_name
| eval duration=latest-earliest, earliest=strftime(earliest,"%Y-%m-%d %H:%M:%S"), latest=strftime(latest,"%Y-%m-%d %H:%M:%S")
| table extracted_job_name earliest latest duration

Ciao. Giuseppe
If the SMTP server is not available then email messages will not be sent.  There is no queueing of emails.