All Topics


When I developed an add-on based on a Splunk cluster, I ran into some problems:

1. I created an index named test on the indexer cluster, and linked the search head cluster nodes to the indexer cluster with "./splunk edit cluster-config -mode searchhead -master_uri <Indexer Cluster Master URI>". I want to write data to this test index through the Splunk API and read the written data back from the other search head nodes, but I found that it is not working. Is this related to my previously creating the index on a search head node? If it is, how can I remove the index from the search head cluster?

2. Is KV Store data synchronized across the search head cluster? What should I do if I want to clean up the environment and delete a KV Store collection in the search head cluster?

3. What is the data communication mechanism within a search head cluster? I want my add-on to synchronize some data across multiple search heads; is there a good method for this?

BR!
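For question 1, a quick way to see where the test index is actually defined and whether the search heads can see it is the rest command; this is only a sketch (run from a search head cluster member, assuming its search peers are reachable):

| rest /services/data/indexes splunk_server=*
| search title=test
| table splunk_server title homePath

If the index shows up on a search head member rather than (or in addition to) the indexer peers, the definition most likely lives in an app or local indexes.conf on the search head tier and can be removed there.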
Hi, I have data like these entries:

link     id    parent   name
----     ---   ------   ---------
link1    311            email.eml
link1    312   311      abc.rar
link2    315   312      xyz.exe

that I want to combine into this:

link            id               parent      name
----            ---              ------      ---------
link1, link2    315, 312, 311    312, 311    xyz.exe, abc.rar, email.eml

The combining condition is based on id and parent: 311 is the parent, 312 is a child of 311, and 315 is a child of 312 (a 'grandchild' of 311). Thank you in advance for your help!
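One possible approach for a hierarchy of known, limited depth (a sketch only; the makeresults blocks just reproduce the sample rows above, and the self-join resolves each row's top-level ancestor before grouping):

| makeresults format=csv data="link,id,parent,name
link1,311,,email.eml
link1,312,311,abc.rar
link2,315,312,xyz.exe"
| eval parent=if(isnull(parent) OR parent="", null(), parent)
| eval parent_l1=parent
| join type=left parent_l1
    [| makeresults format=csv data="link,id,parent,name
link1,311,,email.eml
link1,312,311,abc.rar
link2,315,312,xyz.exe"
    | eval parent=if(isnull(parent) OR parent="", null(), parent)
    | rename id as parent_l1, parent as parent_l2
    | fields parent_l1 parent_l2]
| eval root=coalesce(parent_l2, parent_l1, id)
| stats values(link) as link, list(id) as id, list(parent) as parent, list(name) as name by root
| fields - root

Each extra level of nesting would need another join, so this only scales to a fixed depth; for arbitrary depth a lookup-based or scripted approach would be needed.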
Hi, I'm upgrading and migrating my Splunk Enterprise 8.1.1 running on Windows Server 2012 R2. Does anyone have a recommended path for this? Upgrade first, or migrate first? Usually I would prefer to upgrade first, but I see that 8.2 is not supported on Windows Server 2012 R2.
Hi Team, we are new to Splunk SIEM and need to create real-time use cases based on the MITRE framework for Linux and Palo Alto log sources in a customer environment. Kindly help with this.
Hello, I am creating a Simple XML dashboard (with panels refreshing every 10 or 30 seconds), replicating a live telephony system dashboard (which refreshes every 5 seconds). A Python script fetches data from the telephony system using its REST API every 10 seconds and pushes it to Splunk using HEC. The panels on the Splunk dashboard work fine most of the time, unless there are multiple live calls going on at once or multiple users are accessing the dashboard. In the latter case, searches take a long time to complete (because they are queued due to multiple users viewing the dashboard at the same time?). What is the best way to handle this scenario? Thank you.
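One common pattern for this kind of fan-out is to run the expensive search once on a schedule and let every panel (and every viewer) read the cached results instead of launching its own search. A minimal sketch, assuming a scheduled saved search named telephony_live_stats owned by admin in the search app, and a call_status field produced by that search (all of these names are hypothetical):

| loadjob savedsearch="admin:search:telephony_live_stats"
| stats count by call_status

Each panel then does only cheap post-processing on the cached job, which keeps per-user search concurrency low even when many people have the dashboard open.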
Hi, I have some old Splunk indexed data (Splunk buckets) from version 6.6. Can I just copy them to another Splunk server, which is on version 8.2? Will there be any compatibility issues?
I am trying to query a Splunk search head using the Splunk connector from SOAR. However, my playbook is giving an error in the action block:

Failed to connect to splunk server. HTTP Error 400: Bad Request (1235)

There are no connectivity issues, as I have tested connectivity to our asset in the app and it passed successfully. Yet my playbook is failing with the above error. My playbook design consists of a format block that formats the simple SPL query as:

| makeresults | eval id="This is a test" | eval playbook="App upgrade splunk" | table _time id playbook

which is referenced in the action block that queries the Splunk search head using the Splunk app. Any advice on the possible issue is much appreciated. Thanks in advance!
I've got this search:

index=main sourcetype="bigfix"
| eval raw=_raw
| rex mode=sed field=raw "s/\n/ /g"
| rex field=raw "At \d+:\d+:\d+\s+-0800\s+-(?<message>.*)"
| rex field=message "^(?<message_type>[^:]+):\s"
| eval message_type_ns=replace(message_type, " ", "")
| eval x_message_type=if(message_type == message_type_ns, message_type, "No message type")
| stats count by message_type, message_type_ns, x_message_type

That doesn't appear to be working correctly; I'm always getting either all true or all false. This is the output:

"message_type","message_type_ns","x_message_type",count
" ActionLogMessage",ActionLogMessage,"No message type",240
" ActiveDirectory",ActiveDirectory,"No message type",128
" Client has an AuthenticationCertificate Relay selected",ClienthasanAuthenticationCertificateRelayselected,"No message type",2
" Client shutdown (Service manager shutdown request) ******************************************** Current Date","Clientshutdown(Servicemanagershutdownrequest)********************************************CurrentDate","No message type",3
" Encryption",Encryption,"No message type",11
" Initializing Site",InitializingSite,"No message type",43
" PollForCommands",PollForCommands,"No message type",13
" Processing fixlet site. ******************************************** Current Date","Processingfixletsite.********************************************CurrentDate","No message type",1
" RegisterOnce",RegisterOnce,"No message type",149
" Report posted successfully ******************************************** Current Date","Reportpostedsuccessfully********************************************CurrentDate","No message type",1
" Restricted mode Initializing Site",RestrictedmodeInitializingSite,"No message type",3
" User interface process disabled for user 'user' ActiveDirectory","Userinterfaceprocessdisabledforuser'user'ActiveDirectory","No message type",1
" User interface process disabled for user 'user' ActiveDirectory","Userinterfaceprocessdisabledforuser'user'ActiveDirectory","No message type",1
" User interface session ended for user 'user' User interface session ended for user 'user' ******************************************** Current Date","Userinterfacesessionendedforuser'user'Userinterfacesessionendedforuser'user'********************************************CurrentDate","No message type",1
" User interface session ended for user 'user' ActiveDirectory","Userinterfacesessionendedforuser'user'ActiveDirectory","No message type",1
" User interface session ended for user 'user' ******************************************** Current Date","Userinterfacesessionendedforuser'user'********************************************CurrentDate","No message type",1

When I try this simple case, it works:

| makeresults
| eval string_a="Client shutdown (Service manager shutdown request) ******************************************** Current Date"
| eval string_b="Client_shutdown_(Service_manager_shutdown_request)_********************************************_Current_Date"
| eval my_string=if(string_a == string_b, string_a, string_b)

And the output:

_time: 2023-12-07 10:14:17
my_string: Client_shutdown_(Service_manager_shutdown_request)_********************************************_Current_Date
string_a: Client shutdown (Service manager shutdown request) ******************************************** Current Date
string_b: Client_shutdown_(Service_manager_shutdown_request)_********************************************_Current_Date

What I'm trying to do is find these:

At 09:01:45 -0800 - Encryption: optional encryption with no certificate; reports in cleartext

The above would have message_type=Encryption. This example:

At 09:00:39 -0800 - Starting client version xx.yy.zz.aa FIPS mode disabled by default. Cryptographic module initialized successfully. Using crypto library libBEScrypto - OpenSSL

would have message_type="No message type". I've tried using a colon (:), but there are messages with embedded colons. Any thoughts on how to solve this are appreciated. TIA, Joe
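A likely culprit in the search above: the first rex captures everything after the dash, including the space that follows it, so message_type always begins with a leading space and can never equal its space-stripped copy, which is why the comparison comes out false for every row. A sketch of one alternative, assuming a "real" message type is a single alphabetic token immediately followed by a colon (widen the character class if your types can contain digits or underscores):

index=main sourcetype="bigfix"
| eval raw=_raw
| rex mode=sed field=raw "s/\n/ /g"
| rex field=raw "At \d+:\d+:\d+\s+-0800\s+-\s+(?<message>.*)"
| rex field=message "^(?<message_type>[A-Za-z]+):\s"
| fillnull value="No message type" message_type
| stats count by message_type

Messages without such a prefix, including ones that only contain colons later in the text, simply fail the second rex and fall through to "No message type".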
How do I display a timechart for a specific time period on specific business days? E.g.:

index="someindex" | dedup eventid | timechart count(_raw) by eventName span=60m

but only for Monday, Tuesday, Wednesday, Thursday, and Friday during 6pm to 8pm. Or for specific dates. How can I achieve this? Thanks in advance.
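A sketch of one way to do this, deriving the weekday and hour from _time and filtering before the timechart (the index and field names are taken from the example above):

index="someindex"
| dedup eventid
| eval dow=strftime(_time, "%A"), hr=tonumber(strftime(_time, "%H"))
| where hr>=18 AND hr<20 AND (dow="Monday" OR dow="Tuesday" OR dow="Wednesday" OR dow="Thursday" OR dow="Friday")
| timechart span=60m count by eventName

For specific calendar dates, the same pattern works with an eval such as day=strftime(_time, "%Y-%m-%d") and a where clause listing the dates you want.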
I have data like this:

{
   ...
   name: AppName
   metrics: {
     data: [
       {
         details: { ... }
         name: dataName1
         status: UP
       }
       {
         details: { ... }
         name: dataName2
         status: UP
       }
       { ... }
     ]
     indicators: [...]
     status: DOWN
   }
   logs: { ... }
   ping: 1
}

I tried to extract each name and status inside data, so I used:

spath output=metrics path=metrics
| rename metrics.data{}.name as name, metrics.data{}.status as status
| table _time, name, status

This gives a proper table:

_time                  name        status
2023-12-07 15:36:28    dataName1   UP
                       dataName2   DOWN
                       dataName3   UP
2023-12-07 15:35:29    dataName1   DOWN
                       dataName2   DOWN
                       dataName3   UP
2023-12-07 15:34:30    dataName1   DOWN
                       dataName2   UP
                       dataName3   DOWN

However, after putting this search into the Dashboard Studio search query, it simply returned "No Search Result Returned". Is there something wrong with rename? Thank you!
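A sketch of an alternative extraction that avoids renaming the brace-containing field names, which may behave more predictably when pasted into a Dashboard Studio data source (the paths assume the event structure shown above; append this to the base search that returns the raw events):

| spath path=metrics.data{}.name output=name
| spath path=metrics.data{}.status output=status
| table _time, name, status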
Novice observability practitioners are often overly obsessed with performance. They might approach instrumentation with skepticism and have concerns about latency degradation or resource consumption. The attached article is a primer on the topic of instrumentation overhead, and it will teach you how to think about overhead in an observability context. In it, we cover the causes of overhead, why overhead is so hard to measure, and why it's even harder to reliably predict. Lastly, we will present some practical techniques for understanding overhead in your environment and some strategies for coping with it. Due to length limitations in the blogging platform, this topic is presented as a PDF white paper (attached).
Can you apply transformative operations inside set tags from drilldown tags? For example:

<drilldown>
  <set token="form.builds_tk">$click.value$</set>
</drilldown>

I would like to take the value captured from the click value, split it (or regex it), then use the first value. For example:

<drilldown>
  <set token="form.builds_tk">mvindex(split("$click.value$", "-"), 1)</set>
</drilldown>
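For what it's worth, Simple XML drilldowns also support an <eval> element, which evaluates an expression before assigning the token; a sketch of how that might look here (note mvindex is zero-based, so the first segment is index 0):

<drilldown>
  <!-- sketch only: evaluate the expression, then assign the result to the token -->
  <eval token="form.builds_tk">mvindex(split("$click.value$", "-"), 0)</eval>
</drilldown>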
Hello, when I turned on Total for Statistics under Format > Summary, the output shows a long string of digits after the decimal point:

Total: 1129.3600000000001

How do I round this number to 1129 or 1130? Thank you.

| makeresults format=csv data="Student, Score
a,153.8
b,154.8
c,131.7
d,115.4
e,103.2
f,95.4
g,95.4
h,93.2
i,93.2
j,93.26"
| table Student, Score
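One way to sidestep the floating-point artifact is to compute and round the total in the search itself rather than relying on the Summary row; a sketch built on the sample data above:

| makeresults format=csv data="Student, Score
a,153.8
b,154.8
c,131.7
d,115.4
e,103.2
f,95.4
g,95.4
h,93.2
i,93.2
j,93.26"
| table Student, Score
| addcoltotals labelfield=Student label="Total" Score
| eval Score=if(Student="Total", round(Score, 0), Score)

This keeps the individual scores untouched and renders the total as 1129.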
IDC Report: Enterprises Gain Higher Efficiency and Resiliency With Migration to Cloud

Today many enterprises are adopting a cloud-first strategy to get faster time to value and scale their business. As expansion to the cloud continues, IT leaders are continuously looking for better ways to strengthen security and focus more on driving business value.

Register for our Get Resiliency in the Cloud event, taking place January 18th, 2024 from 8:30 - 11:30 AM PST, to hear IDC and Pacific Dental Services share benefits and best practices for migrating to the cloud.

Migrating on-premises deployments to Splunk Cloud Platform, delivered as a software-as-a-service (SaaS) offering, is a win-win for enterprises. In this white paper, IDC examines the drivers and benefits of moving to the cloud across three current Splunk customers who migrated to Splunk Cloud Platform and saw immediate benefits:

HSBC: Accelerated time to value and increased scalability by >300%
Pacific Dental Services: Increased operational efficiencies by more than 40%
GAF: Realized annual cost savings of 20%

By moving to Splunk Cloud Platform, these organizations were able to free IT operations and security teams from the daily maintenance of on-premises solutions, reduce administrative overhead costs, streamline upgrade cycles, improve cross-team collaboration, and enable the business to provide new offerings to customers quickly. As a bonus, they have been able to take advantage of new features and versions quickly and with less risk.

As more and more firms see flat or negative budgets in the coming year, the need for more efficient operations and quicker time to value means that the growth of cloud and SaaS solutions is likely to continue rapidly. As enterprises migrate new deployments to the cloud, proper planning, understanding the migration process, and managing the impact across people and technologies are essential. Besides reporting the customer benefits of migrating to the cloud, IDC provides a comprehensive blueprint of best practices and recommendations for completing a successful migration to the cloud, delivered as a SaaS solution. This blueprint includes considerations such as leadership buy-in, extensive up-front planning, a thorough understanding of the on-premises environment, and the tools, services, and resources available from the vendor before migration. It also offers advice for IT leaders and teams moving to the cloud regardless of the target SaaS provider.

Learn more:

Register for the cloud event Get Resiliency in the Cloud to hear IDC and Pacific Dental Services share benefits and best practices for migrating to the cloud.
Read the full IDC report - Enterprises Report Benefits of Migrating to Splunk Cloud Platform.
Join The Event Get Resiliency in the Cloud on January 18th, 2024 (8:30AM PST)

Hear industry experts from Pacific Dental Services, IDC, and The Futurum Group CEO Daniel Newman share how to build a strong foundation of security and resilience for your expansion to the cloud. Learn about the drivers and benefits that lead enterprises to build data-centric security and observability on a unified Splunk Cloud Platform, delivered as a service. By migrating deployments to Splunk Cloud Platform, organizations are able to search, analyze, visualize and act on their data with unprecedented insights, security and compliance, all from the cloud. Additionally, you will learn about:

How digital transformation is influencing businesses to expand to the cloud: a talk by Futurum CEO Daniel Newman
The cloud transformation journey of Pacific Dental Services with Splunk
New advancements in Splunk Cloud Platform that accelerate the journey to the cloud
Achieving faster value realization with Splunk services
The tenth leaderboard update (11.23-12.05) for The Great Resilience Quest is out! As our brave questers navigate this adventurous journey, it is incredible to see the progress you have made. Let's check out the latest standings on the leaderboard! Shout out to you all!

Prizes Await

Stay motivated! The next round of Adventurer's Bounty and Champion's Tribute winners will be announced soon! Fantastic prizes await those who reach the top. Keep your eyes on the prize and your mind focused on the quest. Keep up the great work, questers! Your determination is what makes you more and more resilient on this journey.

Best regards,
Splunk Customer Success
I am working with Linux auditd events based on the auditd message and field dictionaries, which we call type and field. (You can access the GitHub site for the .csv files that define messages and fields.) For example, the macro name AUDIT_ADD_GROUP would be type=add_group and the macro name AUDIT_EXECVE would be type=execve. We then have fields by type. SGID is the set group ID, so we could have fields called execve.sgid or add_group.sgid depending on the type value of the event. These are just 2 of the more than 40 types we are tracking, and each type has its own set of applicable fields. For example, there would also be add_group.tty and add_group.proctitle. Is there a way to automatically lop off the prefix of a dot-notation field on ingest? We need to standardize these fields to make them CIM compliant for our data model. The only alternative I see for now would be to use COALESCE to solve this problem (e.g.: eval sgid = coalesce('add_group.sgid', 'execve.sgid')). Doing it this way would mean COALESCE expressions with numerous parameters.
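As a search-time alternative to long coalesce lists, here is a sketch using foreach wildcard templates to copy every <type>.<field> value into a field named after the suffix alone (this assumes the prefixed names contain exactly one dot, as in the examples above):

| foreach *.* [ eval <<MATCHSEG2>> = coalesce('<<MATCHSEG2>>', '<<FIELD>>') ]

For something closer to "on ingest", one FIELDALIAS per prefixed field in props.conf (using ASNEW) can create the unprefixed names at search time without changing any searches, which also keeps the fields mappable to the CIM data model.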
Hi All, we have a server that is reaching EOL; it is currently a deployment server for 4k clients, and we need to migrate to a new machine. Can anyone help with the steps to test connectivity with the new DS and then ultimately migrate to the new deployment server?
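For reference, clients are pointed at a deployment server through deploymentclient.conf, so a common migration pattern is to repoint a small pilot group first to prove connectivity, then roll the change out to the rest. A sketch of the stanza involved (the hostname is hypothetical):

# deploymentclient.conf on the forwarders (sketch; hostname is a placeholder)
[target-broker:deploymentServer]
targetUri = new-ds.example.com:8089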
Hi, I am not sure if this is possible at all, but I figured it is best to ask the experts before I keep spinning in circles. I have created a classic dashboard, and would like to add the ability to toggle the visibility of the column chart data by having the user click any legend label of a data series, so that the columns belonging to that series are toggled off or on.

So in the example below, the column chart displays 2 labels in the legend, "Used" and "Discount", and I would like the user to be able to toggle that view. I do not have access to the backend server and would like to do everything from the GUI. I would like the user to be able to click on the "Used" legend entry so that the column chart removes the "Used" columns and only displays the "Discount" columns, preferably expanded to the width of the chart. I have seen this happen in one of the other column charts within the same dashboard, and I did not add or modify anything to create that behavior. The Drilldown option is set to None for this panel and all other panels, yet by some magic the other panels sometimes toggle the displayed data off/on when the legend labels are clicked. The XML for this panel is below, and any help would be greatly appreciated:

<panel>
  <chart id="chart1">
    <title>Title of the Dashboard</title>
    <search base="base_search">
      <query>| search merchant IN ($merchant$)
| chart sum(used) as Used sum(Discount) as Discount over _time by merchant
| addcoltotals row=f col=t label="Totals" labelfield=merchant fieldname="Totals" Used Discount</query>
    </search>
    <option name="charting.axisTitleX.visibility">visible</option>
    <option name="charting.axisTitleY.visibility">visible</option>
    <option name="charting.axisTitleY2.visibility">visible</option>
    <option name="charting.axisY2.enabled">0</option>
    <option name="charting.chart">column</option>
    <option name="charting.chart.showDataLabels">none</option>
    <option name="charting.drilldown">none</option>
    <option name="charting.legend.placement">right</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
  </chart>
</panel>
index=cs
| rex "Type=(?<type>[a-z]+)"
| rex field=AResponse.BResponse.Message mode=sed "s/Ref number+\w+\sfailed on num:*+/NetworkA failed on num: /g"

Here I hardcoded NetworkA in the second rex, but it is actually a dynamic value and should change according to the value present in the field type. How can I use the type value in the second rex?
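rex mode=sed takes a static replacement string, so it cannot pull the replacement from another field. One workaround is to do the substitution with eval replace(), whose replacement argument can be built from the type field; a sketch reusing the field names above (the regex is copied with minimal changes and may need tuning, and the result is written to a new field called message):

index=cs
| rex "Type=(?<type>[a-z]+)"
| eval message=replace('AResponse.BResponse.Message', "Ref number\w+\sfailed on num:*", type." failed on num: ")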