All Topics

I have this props.conf where _time is almost 6 hours off from the event time. Here is my props.conf:

[app_log]
CHARSET=UTF-8
LINE_BREAKER=([\r\n]+)\d+\-\d+\-\d+\s\d+\:\d+\:\d+\w
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
disabled=false
TIME_FORMAT=%Y-%m-%d %H:%M:%S
TIME_PREFIX=^
TZ=US/Central

Sample log (the event timestamp itself, e.g. "2023-11-14 10:59:58Z", is being extracted fine):

2023-11-14 10:59:58Z stevelog Closed Successfully
2023-11-14 10:59:58Z stevelog_close
2023-11-14 10:59:58Z Resetting CWD back from C:\WINDOWS\SysWOW64\inetsrv
2023-11-14 10:59:58Z Resetting CWD complete, back too C:\WINDOWS\SysWOW64\inetsrv
2023-11-14 10:59:58Z steveEngineMain Thread ====================> END

The indexed time is about 6 hours off from the event time. Please see the attached screenshot and let me know what is causing the time difference.
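A sketch of one likely fix, assuming the trailing "Z" in the sample timestamps denotes UTC (Zulu time): TIME_FORMAT=%Y-%m-%d %H:%M:%S does not consume the Z, and TZ=US/Central then makes Splunk interpret the parsed value as Central time, which in November is UTC-6 and would produce roughly the 6-hour offset described. Declaring the source timezone as UTC should line _time up with the event time:

[app_log]
CHARSET = UTF-8
LINE_BREAKER = ([\r\n]+)\d+\-\d+\-\d+\s\d+\:\d+\:\d+\w
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# The events carry a trailing "Z" (UTC), so declare UTC instead of US/Central.
TZ = UTC
# Optional: limit how far Splunk scans for the timestamp at the start of each event.
MAX_TIMESTAMP_LOOKAHEAD = 25

This would need to live on the first full Splunk instance that parses the data (heavy forwarder or indexer) and only affects events indexed after the change.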
I'm creating an alert to notify Oracle database administrators when a DB Connect connection has failed. I have created the query that returns the name of the failed connection using the Splunk _internal logs. However, I would like to include the hostname and default database that are defined in the connection. I have not been able to locate logs containing the connection host and default database when searching by connection name. Is there a REST or curl command that retrieves the host and default database (using the connection name as input) that I can join with my completed query that retrieves failed connections? Thanks in advance.
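One possible approach, assuming a DB Connect 3.x-style deployment where connection definitions live in db_connections.conf: that file is readable through Splunk's generic configuration REST endpoint, so the host and database can be pulled with the rest command and joined to the failed-connection results. The field connection_name below is a placeholder for whatever field your _internal search returns, and the host/port/database field names should be verified against your own db_connections.conf:

<your failed-connection search over _internal>
| join type=left connection_name
    [| rest /servicesNS/nobody/splunk_app_db_connect/configs/conf-db_connections
     | rename title AS connection_name
     | fields connection_name host port database]

The same endpoint can also be queried outside of search with curl against the management port, e.g. https://<search-head>:8089/servicesNS/nobody/splunk_app_db_connect/configs/conf-db_connections.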
Is the Cisco Secure eStreamer Client Add-On (https://splunkbase.splunk.com/app/3662) even usable on Splunk Cloud? I can install it from the "Browse more apps" page in the cloud app management area, but it seems I will not be able to set it up or use it, because 1) it requires you to edit a config file on disk; 2) it writes the data it retrieves from Cisco to a local disk; and 3) it is not possible to create a file monitor input in Splunk Cloud. The only real option seems to be to use a heavy forwarder. Any suggestions?
Hi! I'm faced with a very specific problem. We use Splunk Enterprise 7.3.0 and have ru_RU in the address bar instead of en-US. In /opt/splunk/etc/system/local/times.conf we changed the display language of the time input to Russian. When the Date & Time Range item is selected in the time input and a period is set with the Between option, the data is applied, but the input itself disappears from the dashboard. An error appears in the browser console: moment().splunkFormat() does not support the locale ru. If we use en_US instead of ru_RU in the address bar, the error does not occur, but that does not suit us. I tried adding a ru.js file to the locale folder, but then Splunk stops working. Please tell me how this error can be fixed. Thanks!
Splunk Observability Cloud's OpenTelemetry Insights page is now available for your GCP and Azure hosts to give you a more comprehensive view of the artifacts that capture metrics, traces, and logs, as well as the state of the integration sources. This view is already available for AWS EC2 hosts. The OpenTelemetry Insights page, accessed via the Data Management module, gives users a complete view of the host inventory, including a full list of instances, the deployment status of the OpenTelemetry Collector for each instance, and the version. With this new level of insight into your data pipeline, you can better understand which parts of your applications and infrastructure still need to be instrumented, improve tracking of agent deployments, and control versioning. Try it today!
We've released new enhancements to the navigators within Splunk Infrastructure Monitoring (IM) to give users greater context and flexibility as they expand visibility and troubleshoot across their environments.

What's New?

Navigator Customization
This new capability allows Observability Cloud admins to customize the dashboards displayed for a given IM navigator, so users can quickly extend the built-in content to align with their unique business needs. Prior to this release, the IM navigators only linked to the standard built-in dashboards, which couldn't easily be updated to show the metrics most relevant to your business. Now admins can select the new "Manage navigator dashboards" option from the navigator screen in the UI to open the navigator settings view, where they can configure the dashboards displayed for that navigator, including adding custom dashboards, hiding a built-in dashboard, and reordering the dashboards in the navigator. Learn more >

Log Charts in Navigators
Building on our recent enhancements to the Kubernetes navigator, this release delivers in-context troubleshooting with the logs-in-dashboards integration. With the Kubernetes navigator, engineers get immediate visibility into their Kubernetes environment with automatic discovery and instrumentation through auto-telemetry. Direct integration of logs provides users with complete, in-place views of related data, from logs to metrics to traces, so you can easily monitor availability and troubleshoot problems without context switching. Learn more >
Hello. I have logs that contain a field "matching", which is a string. The field contains values like: [firstName, lastName, mobileNumber, town, ipAddress, dateOfBirth, emailAddress, countryCode, fullAddress, postCode, etc.]. What I want to do is compose a query that returns the count for a specific combination, such as [mobileNumber, countryCode], and displays only the results containing those words. I tried this query:

index="source*"
| where matching LIKE "%mobileNumber%" AND matching LIKE "%countryCode%"
| stats count by matching
| table count matching

But it returns every possible variant that also contains [mobileNumber, countryCode]. What I want is a single count covering only those results. I also want to build a table with all the specific searches I run. I know how to use append, but the result comes out staggered like a staircase; what other approach could be used? Thank you!
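As a sketch of one way to get a single total instead of one row per distinct matching value (field names taken from the question; the label text is made up):

index="source*"
| where like(matching, "%mobileNumber%") AND like(matching, "%countryCode%")
| eval combination="mobileNumber + countryCode"
| stats count BY combination

To gather several such combinations into one table in a single pass, without append's stair-step layout, conditional counts inside stats may work (again a sketch; the second pair of fields is only an example):

index="source*"
| stats count(eval(like(matching, "%mobileNumber%") AND like(matching, "%countryCode%"))) AS mobileNumber_countryCode,
        count(eval(like(matching, "%firstName%") AND like(matching, "%lastName%"))) AS firstName_lastName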
Hello,
We are trying to change the blacklists below:

blacklist3 = EventCode="4690"
blacklist4 = EventCode="5145"
blacklist5 = EventCode="5156"
blacklist6 = EventCode="4658"
blacklist7 = EventCode="5158"

into a single blacklist covering multiple EventCodes. We have tried:

blacklist3 = EventCode=5145,5156,4658,4690,5158

and

blacklist3 = EventCode="5145" OR "5156" OR "4658" OR "4690" OR "5158"

Neither of these is applied, and the event codes are not being filtered out. Any recommendations on how to get this to work?
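Assuming these stanzas sit under a [WinEventLog://Security]-style input in inputs.conf, one form that should work is the key=regex blacklist, where the value is treated as a regular expression, so the codes can be combined with alternation (a sketch; remove the now-redundant blacklist4-7 entries and restart the forwarder afterwards):

blacklist3 = EventCode="^(4658|4690|5145|5156|5158)$"

When only event codes are being filtered, the simple list format (no key= prefix) is also valid, e.g. blacklist3 = 4658,4690,5145,5156,5158; if your other blacklist entries already use the key=regex format, it is safer to stay consistent with that format.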
I had this search set up to audit dashboard usage:

index=_internal source=*splunkd_ui_access.log /app NOT(user="-" OR uri_path="*/app/*/search")

After updating to 9.1.1 there were very few events matching this search. After a bit of digging it seems that what used to be

"GET /en_US/app/<appname>/<dashboard> HTTP/1.1"

is no longer there, and the '/app' URI part no longer points to dashboards. Instead, I can find the dashboards accessed as

"GET /en-US/splunkd/__raw/servicesNS/<user>/<dashboard>/data/ui/<lots>/<more>

As best as I can see, the information I am interested in now seems to reside in web_access.log instead, which previously contained a lot more information (much like the __raw log does now). The events in web_access.log look like this:

"GET /en-GB/app/<app>/<dashboard> HTTP/1.1"

So I need to modify the original search to exclude launcher and use a different pattern for search, etc. My question is whether working with web_access.log is the correct and optimal approach, rather than the now seemingly harder to work with splunkd_ui_access.log. Or should I be looking at some other source, or approach this in some other way?
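A sketch of the adjusted search against web_access.log (the sourcetype and the exclusions are assumptions; adjust the locale prefix, excluded apps, and the search-pattern exclusion to your environment):

index=_internal sourcetype=splunk_web_access uri_path="*/app/*"
NOT (user="-" OR uri_path="*/app/launcher/*" OR uri_path="*/app/*/search*")
| rex field=uri_path "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
| stats count BY app dashboard user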
Hello,
We had this error on an output query set up in Splunk DB Connect. Basically, the Splunk query inserts data into an external database.

2023-11-08 01:58:32.712 +0100 [QuartzScheduler_Worker-9] ERROR org.easybatch.core.job.BatchJob - Unable to read next record
java.lang.RuntimeException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5]
Message: Premature EOF
at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:128)
at com.splunk.ResultsReader.getNextElement(ResultsReader.java:87)
at com.splunk.ResultsReader.getNextEvent(ResultsReader.java:64)
at com.splunk.dbx.server.dboutput.recordreader.DbOutputRecordReader.readRecord(DbOutputRecordReader.java:82)
at org.easybatch.core.job.BatchJob.readRecord(BatchJob.java:189)
at org.easybatch.core.job.BatchJob.readAndProcessBatch(BatchJob.java:171)
at org.easybatch.core.job.BatchJob.call(BatchJob.java:101)
at org.easybatch.extensions.quartz.Job.execute(Job.java:59)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[836463,5]
Message: Premature EOF
at com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:599)
at com.sun.xml.internal.stream.XMLEventReaderImpl.nextEvent(XMLEventReaderImpl.java:83)
at com.splunk.ResultsReaderXml.getResultKVPairs(ResultsReaderXml.java:306)
at com.splunk.ResultsReaderXml.getNextEventInCurrentSet(ResultsReaderXml.java:124)
... 9 common frames omitted

The issue was related to a query timeout. We have set up the upsert_id in the Splunk DB Connect output configuration so that Splunk performs an insert/update (upsert). Looking into the _internal logs, we understood that Splunk, when using the upsert_id, runs a SELECT query for each record it has to insert and then commits every 1000 records (by default):

2023-11-10 01:22:28.215 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='SELECT FIELD01,FIELD02,FIELD03 FROM MYSCHEMA.MYTABLE WHERE UPSERT_ID=?'
2023-11-10 01:22:28.258 +0100 [QuartzScheduler_Worker-12] INFO com.splunk.dbx.connector.logger.AuditLogger - operation=dboutput connection_name=SPLUNK_CONN stanza_name=SPLUNK_OUTPUT state=success sql='INSERT INTO MYSCHEMA.MYTABLE (FIELD01,FIELD02,FIELD03) values (?,?,?)'

The upsert_id is very useful to avoid an SQL duplicate-key error, and whenever you want to recover data in case the output fails for some reason: you simply re-run the output query, and if a record already exists it is replaced in the SQL table. The side effect is that the WHERE condition of the SELECT statement can become very inefficient once the database table grows large. The solution is to create an SQL index on the upsert_id field in the output database table.

The output run went from 11 minutes to 11 seconds, avoiding the Splunk DB Connect timeout (30 seconds by default, evaluated for every commit).

Best Regards,
Edoardo
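For reference, a minimal sketch of the index described above, using the table and column names from the audit log (exact DDL syntax and index options depend on the target database):

CREATE INDEX IDX_MYTABLE_UPSERT_ID ON MYSCHEMA.MYTABLE (UPSERT_ID);

With the index in place, the per-record SELECT ... WHERE UPSERT_ID=? becomes an index lookup instead of a full table scan, which is what brought the run time down from minutes to seconds.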
I am trying to install the Events Service from the Enterprise Console, but I don't know how to handle this error:

Task failed: Starting the Events Service api store node on host: newMachineAp as user: root with message: Connection to [http://newMachineAp:9080/_ping] failed due to [Failed to connect to newmachineap/192.168.27.211:9080].
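As a first check, assuming curl is available on the Enterprise Console host, hitting the ping URL from the error message will confirm whether the Events Service port is reachable at all (a firewall block, the service not being started, or a wrong hosts entry are common causes of "Failed to connect"):

curl -v http://newMachineAp:9080/_ping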
Please check the screenshot above. How can I convert the count column to GB, and how can I tell which unit of measurement the count is in?
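If the count column actually holds a size in bytes (an assumption that needs to be confirmed from the search that produces it; a plain event count has no size unit and cannot be converted), a sketch of the conversion would be:

... | eval GB=round(count/1024/1024/1024, 2)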
Hello everyone,
I am encountering an issue with the Alert Manager Enterprise application: after an alert is triggered, no event is created in my dedicated index. The health check status is okay, and we are able to create test events.

Another point to note: in the application's troubleshooting logs, when an alert is triggered, the event creation is logged, but nothing appears in the index.

There are no permission issues; I have confirmed by manually running a search that we can create events in the index:

| makeresults
| eval user="TEST", src="192.168.0.1", action="create test event"
| sendalert create_alert param.title="Hello $result.user$" param.template=default

This successfully creates the event in my index. I have exhausted my troubleshooting ideas; do you have any suggestions on how to resolve this issue?
Thank you for your help.
MCH
Does anyone else log in to the Splunk Education page and see that the user record does not exist? I have tried clearing the cache, using incognito mode, and using my phone; it does not work. I can reset my password, which means the record does exist. I have purchased courses on Splunk Education and need to retrieve the receipts to claim them with my organisation.
Hi,
I implemented an input filter, but I want to improve it. Customers want to select multiple values from the filter and then add more values later. In the current situation they need to select 'All' and then select the values again (each time they want to add values they need to select All, select the values, then remove All). In addition, they want to be able to select all values except one, which currently takes time to do. Is there a smarter filter input in Splunk? My code:

<input type="multiselect" token="WinTimeStamp" searchWhenChanged="true">
  <label>Time</label>
  <choice value="%">All</choice>
  <default>%</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>(WinTimeStamp like("</valuePrefix>
  <valueSuffix>"))</valueSuffix>
  <delimiter> OR </delimiter>
  <fieldForLabel>WinTimeStamp</fieldForLabel>
  <fieldForValue>WinTimeStamp</fieldForValue>
  <search>
    <query>| where $Name$ | dedup WinTimeStamp | sort WinTimeStamp</query>
  </search>
</input>

Thanks,
Maayan
Hi All,
I'm trying to create a dashboard in Dashboard Studio with dynamic coloring for single value searches (i.e. traffic lights). When I go to the Coloring > Dynamic Elements dropdown to select Background, it will not select: a click just makes it disappear and the dropdown shows None. The search works fine and displays the correct value. I'm running Splunk Enterprise on-prem, version 9.0.0, build 6818ac46f2ec. I have not found anything on the net or here to suggest this version has an issue. I'm looking to update to 9.1.1, but as this is production, that is a planned exercise.

The search is:

index=SEPM "virus found" | stats count(message) as "Infected hosts"

and I have this traffic light working in the classic dashboard build; I'm just trying Studio to see how it compares.

Any help appreciated!
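As a possible workaround while the dropdown misbehaves, the same dynamic background can be set directly in the dashboard's source JSON. This is a sketch following the documented rangeValue pattern for a single value visualization; the data source id, thresholds, and colors are placeholders:

{
  "type": "splunk.singlevalue",
  "options": {
    "backgroundColor": "> majorValue | rangeValue(backgroundColorEditorConfig)"
  },
  "context": {
    "backgroundColorEditorConfig": [
      { "value": "#118832", "to": 1 },
      { "value": "#d97c11", "from": 1, "to": 10 },
      { "value": "#d41f1f", "from": 10 }
    ]
  },
  "dataSources": { "primary": "ds_infected_hosts" }
}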
Hi Team,
At present, SSL encryption is enabled between the Universal Forwarder (UF) and the Heavy Forwarder (HF), while communication from the HF to the Indexers occurs without SSL encryption. There are plans to establish an SSL channel between the HF and the Indexers in the future. Additionally, communication between the Indexers and the License Master, as well as between the HF and the License Master, currently runs over non-SSL channels, and there is a requirement to transition these communications to SSL-enabled connections. Could you provide guidance or documentation outlining the steps needed to secure the communication from the Indexers and the HF to the License Master?
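As a rough sketch of the License Master leg, assuming certificates have already been issued: license communication runs over the splunkd management port (8089), so securing it comes down to the splunkd TLS settings in server.conf on the License Master and on each peer (Indexers and HF), plus an https:// URI in the [license] stanza on the peers. Paths and hostnames below are placeholders:

# server.conf on each Indexer / HF (license peer)
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/ca.pem
sslVerifyServerCert = true

[license]
# Newer releases use manager_uri; older releases use master_uri.
manager_uri = https://license-master.example.com:8089

The License Master itself needs a matching serverCert/sslRootCAPath in its own [sslConfig] stanza. Verify the exact setting names against the server.conf documentation for your Splunk version.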
My dataset has historical monthly average temperatures for the years 1745 to 2013. Since my source is a CSV file, I used the following so that the _time field represents the timestamp in each event:

source="Global warming trends.zip:*" source="Global warming trends.zip:./GlobalLandTemperaturesByMajorCity.csv" Country=Canada City=Montreal dt=*-01-* AverageTemperature="*"
| eval _time=strptime(dt,"%Y-%m-%d")

However, all the events dated 1970 and earlier don't show their timestamp in the 'Time' column, as per the attached capture. I suspect this has to do with epoch time, but how do I fix this so I can visualize my entire data set in a line chart?
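One thing worth checking, as a sketch only: for dates before 1 January 1970, strptime returns a negative epoch value, which is valid but is not always rendered by every time-based view. Formatting _time back to a string confirms whether the parse itself succeeded (field names taken from the original search):

source="Global warming trends.zip:./GlobalLandTemperaturesByMajorCity.csv" Country=Canada City=Montreal dt=*-01-* AverageTemperature="*"
| eval _time=strptime(dt,"%Y-%m-%d")
| eval parsed_time=strftime(_time, "%Y-%m-%d")
| table dt parsed_time AverageTemperature

If parsed_time is populated for the pre-1970 rows, the strptime conversion is working and the blank Time column is a display issue rather than a parsing one.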
We had a vendor set up a Splunk instance for us a while ago, and one of the things they did was set up a brute force attack alert using the following search:

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication by Authentication.action, Authentication.src
| rename Authentication.src as source, Authentication.action as action
| chart last(count) over source by action
| where success>0 and failure>20
| sort -failure
| rename failure as failures
| fields - success, unknown

This seems to work OK, as I'm getting regular alerts, but the alerts contain little if any detail. Sometimes they contain a server name, so I've checked that server. I can see some failed login attempts on it, but again, no detail: no account names, no IPs, no server names. It may be some sort of scheduled task, as I get an alert from Splunk every hour and every time it reports about the same number of brute force attacks (24), but I can't see any scheduled task that would cause this. Does anyone have any suggestions on how to track down what is causing these false alerts?
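As a sketch of a companion search to dig into the detail (this is not the vendor's alert; it drops the success>0 condition and simply breaks the failures out by user and destination, assuming the Authentication data model populates those fields):

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Authentication where Authentication.action="failure" by Authentication.src, Authentication.user, Authentication.dest
| rename Authentication.* AS *
| stats sum(count) AS failures values(user) AS users values(dest) AS destinations BY src
| where failures > 20
| sort - failures

Running this over the same hour as one of the alerts should show which accounts and destination hosts the roughly 24 failures belong to, which often points to something like a stale credential in a scheduled task or service.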
I'm trying to get specific results when two values of the same field are present, but I keep failing. I want to count the number of times sc_action=REQ_PASSED occurred when sc_action=REQ_CHALLENGE_CAPTCHA was required. I tried this:

My search
| eval activity=if(IN(sc_action, "REQ_CHALLENGE_CAPTCHA", "REQ_PASSED")"passed","captcha")
| stats count by activity

I have tried if/where and various evals; I either get an error or I get all the results where both are true. Maybe I'm overthinking it.
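Two things may help here, sketched under the assumption that there is a field tying a challenge to the subsequent pass (called session_id below purely as a placeholder; it could equally be a source IP or request id). First, the eval syntax: the in() function takes the field as its first argument, and if() needs a comma before the "true" result:

... | eval activity=if(in(sc_action, "REQ_CHALLENGE_CAPTCHA", "REQ_PASSED"), "passed", "captcha")

Second, to count only the passes that happened where a CAPTCHA challenge was also required, group by the correlating field and keep the groups that saw both actions:

... sc_action IN ("REQ_CHALLENGE_CAPTCHA", "REQ_PASSED")
| stats values(sc_action) AS actions count(eval(sc_action="REQ_PASSED")) AS passed BY session_id
| where mvcount(actions) = 2
| stats sum(passed) AS passed_after_captcha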