All Topics

I have been trying to monitor a SQLite database, and have been having nothing but problems. I managed to find some stanzas that apparently worked for other people, notably this one: https://community.splunk.com/t5/All-Apps-and-Add-ons/Monitor-SQLite-database-file-with-Splunk-DB-Connect/m-p/294331

I am able to see the driver in the installed drivers tab, and I can see my stanza among the possible connections when trying to test a query.

I used exactly what was in that previous question and it didn't work; I have since tried several other changes, and currently have this:

db_connection_types.conf:

[sqlite]
displayName = SQLite
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = org.sqlite.JDBC
jdbcUrlFormat = jdbc:sqlite:<database>
ui_default_catalog = main
database = main
port = 443

db_connections.conf:

[incidents]
connection_type = sqlite
database = /opt/tece/pb_data/data.db
host = localhost
identity = owner
jdbcUrlFormat = jdbc:sqlite:<database>
jdbcUseSSL = 0

I am getting an error now, and I also see this in the logs:

2024-12-19 14:38:59.018 +0000 Trace-Id= [dw-36 - GET /api/inputs] INFO c.s.d.s.dbinput.task.DbInputCheckpointFileManager - action=init_checkpoint_file_manager working_directory=/opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect
2024-12-19 14:39:15.807 +0000 Trace-Id=6dac40b0-1bcc-4410-bc28-53d743136056 [dw-40 - GET /api/connections/incidents/status] WARN com.splunk.dbx.message.MessageEnum - action=initialize_resource_bundle_files error=Can't find bundle for base name Messages, locale en_US

I have tried two separate SQLite drivers: the most up-to-date one, and the one specifically for the version of SQLite that the database uses. Anyone have any ideas?
Hi, I have the query below, where I calculate the total prod server count in the first dataset and plot a timechart of the server count in the second dataset. What I want to display is a line chart with the total prod server count shown as a threshold line, and the server count from the timechart as the other line.

index=data sourcetype="server"
| rex field=_raw "server=\"(?<EVENT_CODE>[^\"]*)"
| search [ | inputlookup prodata_eventcode.csv | fields EVENT_Code ]
| stats dc(host_name) as server_prod_count
| rename
| append
    [ | search index=appdata source=appdata_value
      | rex field=value "\|(?<Item>[^\|]+)?\|(?<EVENT_CODE>[^\|]+)|(?<PROD_Count>[^\|]+)?"
      | dedup DATE, EVENT_CODE
      | timechart span=1d sum(PROD_Count) as SERVER_COUNT ]
| table _time, local_PROD_COUNT, snow_prod_count
| rename DYNA_PROD_COUNT as SERVER_COUNT, snow_prod_count as Threshold

The question is: how can I get the threshold value into all the rows so that I can plot threshold vs. server count in the line graph? Below is the snapshot.
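A minimal sketch of one possible approach, untested and reusing the poster's field, index, and lookup names as-is: make the timechart the base search, bring the single-row total in with appendcols, and copy it onto every row with eventstats so the threshold plots as a flat line. Note the third delimiter in the rex is escaped as \| here, which the original regex appears to be missing.

index=appdata source=appdata_value
| rex field=value "\|(?<Item>[^\|]+)?\|(?<EVENT_CODE>[^\|]+)\|(?<PROD_Count>[^\|]+)?"
| dedup DATE, EVENT_CODE
| timechart span=1d sum(PROD_Count) as SERVER_COUNT
| appendcols
    [ search index=data sourcetype="server"
      | rex field=_raw "server=\"(?<EVENT_CODE>[^\"]*)"
      | search [ | inputlookup prodata_eventcode.csv | fields EVENT_Code ]
      | stats dc(host_name) as Threshold ]
| eventstats max(Threshold) as Threshold
| table _time, SERVER_COUNT, Threshold

appendcols only places the subsearch value on the first row; the eventstats pass is what propagates it to every row so both series span the full time range.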
I am trying to track file transfers from one location to another.

Flow: files are copied from the file copy location to the target location. Both the file copy location and target location logs are in the same index, but each has its own sourcetype. The file copy location has one event per file, but the target location has events that list multiple file names.

Log format of the file copy location:

2024-12-18 17:02:50 , file_name="XYZ.csv", file copy success
2024-12-18 17:02:58, file_name="ABC.zip", file copy success
2024-12-18 17:03:38, file_name="123.docx", file copy success
2024-12-18 18:06:19, file_name="143.docx", file copy success

Log format of the target location:

2024-12-18 17:30:10 <FileTransfer status="success">
    <FileName>XYZ.csv</FileName>
    <FileName>ABC.zip</FileName>
    <FileName>123.docx</FileName>
</FileTransfer>

Desired result:

File Name    FileCopyLocation       Target Location
XYZ.csv      2024-12-18 17:02:50    2024-12-18 17:30:10
ABC.zip      2024-12-18 17:02:58    2024-12-18 17:30:10
123.docx     2024-12-18 17:03:38    2024-12-18 17:30:10
143.docx     2024-12-18 18:06:19    Pending

I want to avoid join.
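A minimal sketch of a join-free approach, with the index and sourcetype names (your_index, filecopy, target) purely hypothetical and assuming file_name is auto-extracted from the key=value pairs in the copy events: search both sourcetypes at once, pull the XML file names out with rex max_match=0, expand them, and then stats everything together by file name.

index=your_index (sourcetype=filecopy OR sourcetype=target)
| rex max_match=0 field=_raw "<FileName>(?<xml_file>[^<]+)</FileName>"
| eval file=coalesce(file_name, xml_file)
| mvexpand file
| eval copy_time=if(sourcetype=="filecopy", _time, null()), target_time=if(sourcetype=="target", _time, null())
| stats min(copy_time) as copy_epoch, min(target_time) as target_epoch by file
| eval FileCopyLocation=strftime(copy_epoch, "%Y-%m-%d %H:%M:%S")
| eval "Target Location"=coalesce(strftime(target_epoch, "%Y-%m-%d %H:%M:%S"), "Pending")
| table file, FileCopyLocation, "Target Location"

The pattern is simply that stats by file collapses the two event types into one row per file, and anything with no target timestamp shows as Pending.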
What protocols does the Windows add-on use to collect data and send it to the Splunk server? HTTPS?
Hello there. I would like to ask about Splunk best practices, specifically regarding cluster architecture. One suggested practice is to configure all Splunk servers running Splunk Web (i.e., search heads) as members of the indexer cluster; at least that is what I hear from the architecture lesson. For example, for a Splunk deployer I would need to use this command (or achieve the same through the web UI):

splunk edit cluster-config -mode searchhead -manager_uri https://x.x.x.x:8089 (indexer cluster manager IP) -secret idxcluster

Another suggested practice is to also add these Splunk servers (such as the deployer mentioned above) under Distributed Search > Search Peers on the manager. I would like to know why these are good practices and what the benefits are. (The deployer is not really a search head, is it?) Thank you.
I want to increase the frozen time period of one of my indexes from 12 months to 13 months. I have increased the Max Size of Entire Index on the Splunk indexer under Settings, but I know this is not enough, as the index's frozen time period is still set to 12 months. So where should I update this value? Do I need to update the indexes.conf file for the required indexes on the indexer server itself, which is installed on a Linux machine? What do I need to take care of while updating this frozen time period?
How high is the incoming data volume for monitoring? Where is the data stored?
Hello Everyone,

I'm currently exploring Splunk Observability Cloud to send log data. From the portal, it appears there are only two ways to send logs: via Splunk Enterprise or Splunk Cloud. I'm curious if there's an alternative method to send logs using the Splunk HTTP Event Collector (HEC) exporter. According to the documentation here, the Splunk HEC exporter allows the OpenTelemetry Collector to send traces, logs, and metrics to Splunk HEC endpoints. Is it also possible to use fluentforward, otlphttp, signalfx, or anything else for this purpose?

Additionally, I have an EC2 instance running the splunk-otel-collector service, which successfully sends infrastructure metrics to Splunk Observability Cloud. Can this service also facilitate sending logs to Splunk Observability Cloud? According to the agent_config.yaml file provided by the splunk-otel-collector service, there are several pre-configured service settings related to logs, including logs/signalfx, logs/entities, and logs. These configurations use different exporters such as splunk_hec, splunk_hec/profiling, otlphttp/entities, and signalfx. Could you explain what each of these configurations is intended to do?

  service:
    extensions: [health_check, http_forwarder, zpages, smartagent]
    pipelines:
      traces:
        receivers: [jaeger, otlp, zipkin]
        processors:
          - memory_limiter
          - batch
          - resourcedetection
          #- resource/add_environment
        exporters: [otlphttp, signalfx]
        # Use instead when sending to gateway
        #exporters: [otlp/gateway, signalfx]
      metrics:
        receivers: [hostmetrics, signalfx, statsd]
        processors: [memory_limiter, batch, resourcedetection]
        exporters: [signalfx, statsd]
        # Use instead when sending to gateway
        #exporters: [otlp/gateway]
      metrics/internal:
        receivers: [prometheus/internal]
        processors: [memory_limiter, batch, resourcedetection, resource/add_mode]
        # When sending to gateway, at least one metrics pipeline needs
        # to use signalfx exporter so host metadata gets emitted
        exporters: [signalfx]
      logs/signalfx:
        receivers: [signalfx, smartagent/processlist]
        processors: [memory_limiter, batch, resourcedetection]
        exporters: [signalfx]
      logs/entities:
        # Receivers are dynamically added if discovery mode is enabled
        receivers: [nop]
        processors: [memory_limiter, batch, resourcedetection]
        exporters: [otlphttp/entities]
      logs:
        receivers: [fluentforward, otlp]
        processors:
          - memory_limiter
          - batch
          - resourcedetection
          #- resource/add_environment
        exporters: [splunk_hec, splunk_hec/profiling]
        # Use instead when sending to gateway
        #exporters: [otlp/gateway]

Thanks!
Hi everyone, I created my own lab to learn how to configure best practices for Windows. I created one Windows VM and ran a scan against localhost (127.0.0.1) to gather information such as open ports. Unfortunately, when the scan triggers I can't see anything like the result in Splunk. Do I need to configure something in Windows, or somewhere else?
Heya Splunk Community folks, In an attempt to make a fairly large table in DS readable, I was messing around with fontSize, and I noted that the JSON parser in the code editor was telling me that pattern: "^>.*" is valid for the property: options.fontSize. Is that actually enabled in DS, does anyone know? In other words, can I put a selector/formatting function in (for example, formatByType) and have the fontSize selected based on whether the column is a number or text type? If so, what's the syntax for the context definition? For example, is there a way to make this work? "fontSize": ">table | frameBySeriesTypes(\"number\",\"string\") | formatByType(fontPickerConfig)" (If not, there should be!) Thanks!
A data model is created with a root search dataset, and acceleration is enabled.

Root search query 1: index=abc sourcetype=xyz field_1="1"
Root search query 2: index=abc sourcetype=xyz field_1="1" | fields _time field_2 field_3

For both queries, the auto-extracted fields (_time, field_2, field_3) are added.

These are general questions for better understanding; I would like suggestions on which usage (tstats, | datamodel, root event, root search with a streaming command, root search without a streaming command) is preferable in which scenario.

1. | datamodel datamodelname datasetname | stats count by field_3
For query 1 (root search without a streaming command), the output is pretty fast, just below 10 seconds. For query 2 (root search with a streaming command), the output takes more than 100 seconds.

2. For query 2, the tstats command also takes more than 100 seconds and only returns results when summariesonly=false is added. Why does it not return results when summariesonly=true is set? For query 1, it works with both summariesonly=false and summariesonly=true, and the output is fast, less than 2 seconds actually. So in what scenario is it recommended to add streaming commands to a root search and accelerate it, when in return the generated search adds the fields clause twice, which becomes more inefficient?

For example, this is for query 2:
| datamodel datamodelname datasetname | stats count by properties.ActionType

Underlying query that is run:
(index=* OR index=_*) index=abc sourcetype="xyz" field_1="1" _time=* DIRECTIVES(READ_SUMMARY(datamodel="datamodelname.datasetname" summariesonly="false" allow_old_summaries="false")) | fields "_time" field_2 field_3 | search _time = * | fields "_time" field_2 field_3 | stats count by properties.ActionType

3. In general, what is recommended?
- When a data model is accelerated, using either | datamodel or | tstats gives better performance.
- When a data model is not accelerated, only | tstats gives better performance.
Is this correct?

4. When a data model is not accelerated, the | datamodel command pulls the data from the raw buckets, so what is the benefit of querying the data through the data model instead of the index directly, when the performance is the same?

5. When querying | datamodel datamodelname datasetname, why does Splunk add (index=* OR index=_*) by default? Can that be changed?
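For reference, a minimal sketch of the tstats form of the same count, reusing the placeholder names from the post (datamodelname, datasetname, field_3); in tstats, fields belonging to a dataset are referenced with the dataset name as a prefix:

| tstats summariesonly=true count from datamodel=datamodelname.datasetname by datasetname.field_3

With summariesonly=true, tstats reads only from the acceleration summaries, so it returns nothing for data that has not been summarized; with summariesonly=false, it falls back to the raw events for any range the summary does not cover.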
Hello everyone! I could most likely solve this problem given enough time, but I never seem to have enough.

Within Enterprise Security we pull asset information via ldapsearch into our ES instance hosted in Splunk Cloud. The cn=* field contains a mix of IP addresses and hostnames. We aim for host fields to be either hostname or nt_host, but some of the values are written like this: cn=192_168_1_1

I want to evaluate the existing field and output those values as normal dotted decimals when they appear. I am assuming I would need an if statement that keeps hostname values intact while performing the conversion on the rest. I am not at my computer right now, but I will update with some data and my progress so far.

Thanks!
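A minimal sketch of the kind of eval described above, assuming the extracted field is literally named cn and that the underscore-encoded values always have four octets; hostnames fall through untouched:

| eval cn=if(match(cn, "^\d{1,3}(_\d{1,3}){3}$"), replace(cn, "_", "."), cn)

The match() test plays the role of the if statement: only values that look like an underscore-separated IPv4 address get rewritten, and everything else keeps its original hostname value.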
December 2024 Edition Hayyy Splunk Education Enthusiasts and the Eternally Curious!  We’re back with another edition of indexEducation. Oh, but this month we’ve got a fun holiday edition. It’s our way of wrapping up the year and sharing our thanks to you for being the best community of users and learners on the planet. Until next year, we leave you with this Splunky rendition of an old holiday classic. The 12 Days of Splunk-mas   On the first day of Splunk-mas, my true love gave to me   ~ A Catalog of Splunk Classes for Free ~ Learn anywhere, anytime – for free   **************************   On the second day of Splunk-mas, my true love gave to me  ~ Two Ways to Learn it ~ Instructor-led and self-paced classes **************************   On the third day of Splunk-mas, my true love gave to me ~ Three Class Champions ~  Get to know our course instructors **************************   On the fourth day of Splunk-mas, my true love gave to me ~ Four Smartness Stories ~ Read interviews with inspiring Splunk users **************************   On the fifth day of Splunk-mas, my true love gave to me ~ Five Golden Badges ~ Validate your expertise with Splunk Certification badges **************************   On the sixth day of Splunk-mas, my true love gave to me ~ Six Ways It’s Proven~ Discover how proficiency in Splunk has career benefits **************************   On the seventh day of Splunk-mas, my true love gave to me ~ Seven Experts Sharing ~   Discover use cases, product tips, and expert guidance on Splunk Lantern **************************   On the eighth day of Splunk-mas, my true love gave to me: ~ Eight Labs a-Launching ~   Enroll in instructor-led and self-paced courses with hands-on labs **************************   On the ninth day of Splunk-mas, my true love gave to me ~ Nine Sophomores SOC’ing ~      Splunk Academic Alliance is preparing the next generation through university training **************************   On the tenth day of Splunk-mas, my true love gave to me ~ Ten ALPs a Teaching ~  Authorized Learning Partners (ALPs) across the globe provide localized learning **************************   On the eleventh day of Splunk-mas, my true love gave to me ~  Eleven Courses Releasing ~ Enroll in a new course today  **************************   On the twelfth day of Splunk-mas, my true love gave to me ~ Twelve Hands-a-Keying ~ Attend Splunk .conf25 to get hands-on-keyboard learning **************************   Thanks for sharing a few minutes of your day with us and this special holiday edition of the indexEducation newsletter. See you next year!   Answer to Index This: A Splunky rendition of a traditional holiday classic.
I currently have two different tables: the first shows the number of firewalls each location has (WorkDay_Location), from an inventory lookup file, and the second shows how many firewalls are logging to Splunk, by searching the firewall indexes to validate that they are logging. I would like to combine them and add a third column that shows the difference.

I run into problems with multisearch since I am using a lookup (via inputlookup), and another lookup where I search for firewalls by hostname; if the hostname contains a certain naming convention, it is matched against a lookup file that maps hostname to WorkDay_Location.

FIREWALLS FROM INVENTORY - by Workday Location

| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count by WorkDay_Location

FIREWALLS LOGGING TO SPLUNK - by Workday Location

index=firewalls OR index=alerts AND host="*dmz-f*"
| rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
| lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
| stats dc(host) by Workday_Location
| sort Workday_Location

Current output:

Table 1: Firewalls from inventory search
WorkDay_Location    count
Location_1          5
Location_2          5

Table 2: Firewalls logging to Splunk search
WorkDay_Location    count
Location_1          3
Location_2          5

Desired output:

WorkDay_Location    FW_Inventory    FW_Logging    Diff
Location_1          5               3             2
Location_2          5               5             0

Appreciate any help if this is possible.
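A minimal sketch of one join-free way to combine them, untested, keeping the two searches from the post as-is and gluing them together with append; note the inventory lookup uses WorkDay_Location while the hostname-mapping lookup outputs Workday_Location, so one of them has to be renamed before the final stats:

| inputlookup fw_asset_lookup.csv
| search ComponentCategory="Firewall*"
| stats count as FW_Inventory by WorkDay_Location
| append
    [ search (index=firewalls OR index=alerts) host="*dmz-f*"
      | rex field=host "(?<hostname_code>[a-z]+\-[a-z0-9]+)\-(?<device>[a-z]+\-[a-z0-9]+)"
      | lookup device_sites_master_hostname_mapping.csv hostname_code OUTPUT Workday_Location
      | stats dc(host) as FW_Logging by Workday_Location
      | rename Workday_Location as WorkDay_Location ]
| stats max(FW_Inventory) as FW_Inventory, max(FW_Logging) as FW_Logging by WorkDay_Location
| fillnull value=0 FW_Inventory FW_Logging
| eval Diff=FW_Inventory-FW_Logging
| sort WorkDay_Location

The parentheses around the two indexes are an assumption about the intended boolean grouping; the rest mirrors the original searches, with stats max() collapsing the appended rows into one row per location.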
Hello, I just started using the new Dashboard Studio at work and I am having a few problems. For one, with classic dashboards I was able to share my input configuration (drop-downs and such) with someone else by sharing the current URL; however, the URL does not seem to contain that information this time, and every time I share or reload, it reverts to defaults. What is the solution here to share?
Hi, I'm trying to add source information for the metric (like k8s pod name, k8s node name, etc.) from the splunk-otel-collector agent and then send it to the gateway (data forwarding model). I tried using the attributes and resource processors to add the source info, then enabled those processors in the pipelines in agent_config.yaml. In gateway_config.yaml, I added processors with from_attribute to read from the agent's attribute. But I couldn't add additional source tags to my metric. Can anyone help here? Let me know if you need more info; I can share it. Thanks, Naren
Does anyone know if there is a way to suppress the sending of alerts during a certain time interval if the result is the same as the previous trigger? If the result changes, it should trigger regardless of any suppression, or only trigger when there is a new event that causes it to trigger.
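A minimal sketch of one pattern that can approximate this, with every name here (the index, the status field, and the last_alert_state.csv lookup) purely hypothetical: keep the previously alerted result in a lookup, only let rows through when they differ from it, and overwrite the stored state on each run. The lookup file has to be created once up front (for example with an initial outputlookup) so the lookup command does not error on the first run.

index=my_index sourcetype=my_sourcetype
| stats latest(status) as current_status by host
| lookup last_alert_state.csv host OUTPUT current_status as last_status
| eval changed=if(isnull(last_status) OR current_status!=last_status, 1, 0)
| outputlookup last_alert_state.csv
| where changed=1

With the alert condition set to "number of results greater than 0", an unchanged result produces no rows and therefore no notification, while any change (or a brand-new host) triggers regardless of how recently the alert last fired.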
We love our Splunk Community and want you to feel inspired by all your hard work! Eric Fusilero, our VP of Global Education, just dropped a great blog post that showcases the power of habits, learning, and community. Drawing from personal experiences – cue swimming progress – Eric connects the dots between building strong habits and achieving career success. He shines a spotlight on the 2024 Splunk Career Impact Report, which is packed with insights from nearly 500 Splunk users across the globe. TLDR: Splunk learners who invest in certifications and skill-building are absolutely thriving. What’s in the Report? Career Wins Galore! The 2024 Splunk Career Impact Report highlights how our community is crushing it—whether it’s earning 14% higher pay on average or snagging double the promotions compared to last year. Eric’s blog breaks it down, showing how Splunk Certifications are more than just badges—they’re game-changers. Certified users, especially early in their careers, are seeing massive salary bumps, with younger professionals earning up to 52% more than their non-certified peers.   Why You Should Read Eric’s Blog Right Now This isn’t just about stats; it’s about celebrating you. Eric highlights how the Splunk community’s feedback helped shape this report, and he shares why continuous learning is the secret sauce to staying ahead in tech. From hands-on labs to our buzzing online forums, Splunk Education offers something for everyone looking to level up. So, what are you waiting for? Dive into Eric’s blog and see how building great habits with Splunk can take your career to the next level! Read Eric’s blog here Check out the full 2024 Splunk Career Impact Report Happy learning! -Callie Skokos on behalf of the entire Splunk Education Crew
I'm under the impression that HEC ingestion directly to the indexers is supported natively on Splunk Cloud. I wonder whether HEC ingestion directly to the indexers is supported in the same way on-prem?