All Topics


Hello friends! I get JSON events like this:

{"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}

and so on:

{"key":"29.09.2023","value_cnt":2736,"value_sum":51150570.59}

A raw event looks like this:

10/4/23 1:23:03.000 PM   {"key":"27.09.2023","value_sum":35476232.82,"value_cnt":2338}
host = app-damu.hcb.kz   source = /opt/splunkforwarder/etc/apps/XXX/pays_7d.sh   sourcetype = damu_pays_7d

And I want to get a table like this:

days        sum          cnt
27.09.2023  35476232.82  2338
29.09.2023  51150570.59  2736

So I need to take the latest events and put them into a table. Please help.
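A minimal SPL sketch for building that table, assuming the JSON fields (key, value_sum, value_cnt) are already extracted for the sourcetype or can be pulled out with spath; the index name is a placeholder:

index=your_index sourcetype=damu_pays_7d
| spath
| stats latest(value_sum) AS sum latest(value_cnt) AS cnt BY key
| rename key AS days
| table days sum cnt

stats latest(...) keeps only the most recent value per key, which matches the requirement to take the latest events.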
Hi all, I just wanted to ask if there is a way to pass the username and password when starting the Splunk forwarder (9.1.1) on a Linux system for the first time. With the Windows Universal Forwarder this is possible during installation via:

msiexec.exe /i splunkforwarder_x64.msi AGREETOLICENSE=yes SPLUNKUSERNAME=SplunkAdmin SPLUNKPASSWORD=Ch@ng3d! /quiet

Is there a similar command for the Linux forwarder?

Best regards
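A sketch of one commonly used approach on Linux: seed the admin credentials in user-seed.conf before the first start, then start non-interactively. This reflects general UF practice rather than the 9.1.1 docs specifically, so verify the exact options against the documentation for your version:

# /opt/splunkforwarder/etc/system/local/user-seed.conf  -- create this file BEFORE the first start
[user_info]
USERNAME = SplunkAdmin
PASSWORD = Ch@ng3d!

# then start without prompts
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt

The seed file is consumed on first startup and the credentials become the initial admin account.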
I have configured 5 domain controllers to send logs to Splunk by installing the UF. DC2 and DC5 are reporting Windows event logs as configured, but I am missing the other 3 DCs. All of them are logging to _internal. What should I do to correct the logging?
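Since all five forwarders reach _internal, the output path works and the gap is most likely the Windows event log inputs on the three silent DCs. A sketch of the inputs.conf stanzas to confirm are present and enabled there; the index name is an assumption based on the post:

# inputs.conf on the universal forwarder (e.g. in Splunk_TA_windows/local)
[WinEventLog://Security]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog

[WinEventLog://Application]
disabled = 0
index = wineventlog

Comparing the effective configuration on a working DC and a silent one with "splunk btool inputs list WinEventLog --debug" should show whether the stanzas differ.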
Hi, I have this command:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5

and I would like to count the hosts and trigger when I have fewer than 3 hosts. I tried something like this:

| stats dc(host) as c_host | where c_host > 3

but it's not working as expected. Any idea? Thanks!
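One possible explanation, with a sketch: after the timechart splits by host there is no longer a host field to count (each host becomes a column), and to fire on fewer than 3 hosts the condition needs to be < 3 rather than > 3. Untable-ing the timechart output restores the host values before counting; this is a sketch under those assumptions:

| mstats avg("value1") prestats=true WHERE "index"="my_index" span=10s BY host
| timechart avg("value1") span=10s useother=false BY host WHERE max in top5
| untable _time host avg_value
| stats dc(host) AS c_host
| where c_host < 3

If any rows survive the final where, the alert condition is met; with 3 or more hosts everything is filtered out.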
Hi, I'm trying to plot a graph of the average over the previous two occurrences of the current weekday. Below is the query used:

index="xyz" sourcetype="abc" app_name="123" OR "456" earliest=-15d@d latest=now
| rex field=msg "\"[^\"]*\"\s(?<status>\d+)"
| eval HTTP_STATUS_CODE=case(like(status, "2__"),"2xx")
| eval current_day = strftime(now(), "%A")
| eval log_day = strftime(_time, "%A")
| where current_day == log_day
| eval hour=strftime(_time, "%H")
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE

This plots a graph for the complete 24 hours. I wanted to know if I can limit the graph to the current timestamp. Say the system time is now 11AM: I want the graph plotted only up to 11AM and not the entire 24 hours. Can it be done? Please advise.
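A sketch of one way to cap the chart at the current hour: keep an hour-of-day filter right after the hour eval, so only buckets up to "now" survive. Only the where line is new; the rest is the original query:

| eval hour=strftime(_time, "%H")
| where tonumber(hour) <= tonumber(strftime(now(), "%H"))
| eval day=strftime(_time, "%d")
| stats count by hour day HTTP_STATUS_CODE
| chart avg(count) as average by hour HTTP_STATUS_CODE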
The log looks like this:

message: [22/09/23 10:31:47:935 GMT] [ThreadPoolExecutor-thread-15759] INFO failed.", suspenseAccountNumber="941548131", suspenseAccountBSB="083021", timeCreate as OTHER BUSINESS REASON returned by CBIS.", debtor RoutingType="BBAN", debtor Routing Id="013013", creditor RoutingType="BBA 6899-422f-8162-6911da94e619", transactionTraceIdentification-1311b8a21-6d6c-422b-8 22T10:31:42.8152_00306", instrId="null", interactionId="null", interactionOriginators tx_uid-ANZBAU3L_A_TST01_ClrSttlmve01_2023-09-22T10:31:42.8152 00306, txId-ANZBAU3L priority-NORM, addressingType=noAlias, flow-N5XSuspense.receive]

How do I extract the transactionTraceIdentification field? I already tried:

| rex field=message "transactionTraceIdentification=\"(?<transactionTraceIdentification>.*?)\","

but the value is not extracted.
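A hedged sketch, assuming the sample above is representative: in the sample the field appears as transactionTraceIdentification-<value> (a hyphen, no equals sign and no quotes), so a regex that expects ="..." will never match. Something more tolerant of either form could be tried:

| rex field=message "transactionTraceIdentification[=\-]\"?(?<transactionTraceIdentification>[^,\s\"]+)"

This accepts either = or - as the separator and stops at the first comma, whitespace, or quote; adjust the character class if the real values can contain spaces.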
Hello, I have the below dashboard panel, which is populated from a lookup:

Name    Organization    Count
Bob     splunk          2
Matt    google          15
smith   facebook        9

What I'm looking for is:

1. If I click Bob, it should open a new search tab with the query "| inputlookup mydetails.csv | search Name=Bob".
2. If I click splunk, it should open a new URL, "www.splunk.com".

And so on for all the values respectively. How do I achieve both of these within one panel?
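A sketch in Simple XML, assuming this is a classic (SimpleXML) dashboard table: a condition-based drilldown can branch on which column was clicked. The URL built from the organization name is a guess at the intent, not something stated in the post:

<drilldown>
  <condition field="Name">
    <link target="_blank">search?q=%7C%20inputlookup%20mydetails.csv%20%7C%20search%20Name%3D$click.value2|u$</link>
  </condition>
  <condition field="Organization">
    <link target="_blank">https://www.$click.value2$.com</link>
  </condition>
</drilldown>

$click.value2$ holds the value of the clicked cell, so the Name branch opens a search filtered to that name and the Organization branch opens the matching site.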
Anyone have an idea on the below issue?

| inputlookup test

The lookup table file and definition are both available, and both have read permission for everyone, shared at app level, but when I try the inputlookup I see an error. Initially the lookup definition was readable by everyone and the lookup file was readable only by admin, so I changed the file to everyone this afternoon and tried the search again, but I am still getting the error below:

The lookup table 'test' requires a .csv or KV store lookup definition.
The lookup table 'test' is invalid.

By the way, this is on a production search head in a clustered environment.
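A small sketch for narrowing this down: confirm, from the affected search head, that a lookup definition (not just the .csv file) named 'test' is actually visible in the app context you are searching from. The field names below are the ones this REST endpoint typically returns:

| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local
| search title="test"
| table title filename collection eai:acl.app eai:acl.sharing eai:acl.perms.read

If nothing comes back, only the file exists and the definition still needs to be created (or shared) in an app the search can see; that mismatch typically produces exactly the "requires a .csv or KV store lookup definition" message.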
Hello, I was trying to use a REGEX in props/transforms conf files to extract fields, but the field extraction is not working. Two sample events and my props/transforms settings are given below. Any recommendations will be highly appreciated. Thank you so much.

props.conf

[mysourcetype]
SHOULD_LINEMERGE=false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX=^
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=24
TRUNCATE = 9999
REPORT-fieldEtraction = fieldEtraction

transforms.conf

[fieldEtraction]
REGEX = \{\"UserID\":\"(?P<UserID>\w+)\","UserType":\"(?P<UserType>\w+)\","System":\"(?P<System>\w+)\","UAT":\"(?P<UAT>.*)\","EventType":\"(?P<EventType>.*)\","EventID":"(?P<EventID>.*)\","Subject":"(?P<Subject>.*)\","EventStatus":"(?P<EventStatus>.*)\","TimeStamp":\"(?P<TimeStamp>.*)\","Device":"(?P<Device>.*)\","MsG":"(?P<Message>.*)\"}

Sample events

2023-10-03T18:56:31.099Z OTESTN097MA4513020 TEST[20248] {"UserID":"8901A","UserType":"EMP","System":"TEST","UAT":"UTA-True","EventType":"TEST","EventID":"Lookup","Subject":"A516617222","EventStatus":"00","TimeStamp":"2023-10-03T18:56:31.099Z","Device":" OTESTN097MA4513020","Msg":"lookup ok"}
2023-10-03T18:56:32.086Z OTESTN097MA4513020 TEST[20248] {"UserID":"8901A","UserType":"EMP","System":"TEST","UAT":"UTA-True","EventType":"TEST","EventID":"Lookup","Subject":"A516617222","EventStatus":"00","TimeStamp":"2023-10-03T18:56:32.086Z","Device":" OTESTN097MA4513020","Msg":"lookup ok"}
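For comparison, a looser sketch that avoids hand-escaping every key: a generic key/value transform applied over the JSON portion of the event. The stanza name json_kv_pairs is made up here; note also that the sample events spell the last key "Msg" while the original regex looks for "MsG", which on its own would make that long anchored pattern fail:

transforms.conf

[json_kv_pairs]
REGEX = \"([A-Za-z]+)\":\"([^\"]*)\"
FORMAT = $1::$2

props.conf

[mysourcetype]
REPORT-jsonkv = json_kv_pairs

This creates one search-time field per quoted "key":"value" pair, using each key as the field name, so a single case mismatch no longer breaks the whole extraction.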
Register here. This thread is for the Community Office Hours session on Splunk Search on Wed, Dec 13, 2023 at 1pm PT / 4pm ET.

This special 1-hour session is your opportunity to ask questions related to your specific Splunk Search challenge, use case, best practices, or any new features/capabilities in search, including:

- Tips & tricks for faster searches, scheduled searches, etc.
- Best practices for optimizing search performance
- Using SPL commands
- Federated search (e.g., for Amazon S3)
- Creating alerts, visualizations, and dashboards from searches
- How to translate your questions into SPL
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Register here. This thread is for the Community Office Hours session on Security: SOAR on Wed, Nov 29, 2023 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific Splunk Security orchestration, automation, and response (SOAR) challenge or use case, including:

- What's new in SOAR 6.2 (Logic Loops, CyberArk integration, etc.)
- Attack Analyzer
- Developing Playbooks, Workbooks and process workflows
- Integrating security, IT operations and threat intelligence tools
- Automatic incident response
- Automating threat hunting, penetration testing, etc.
- Applying configuration changes, app installation, and maintenance
- Success measurement
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hello, I am seeing the below error in the internal logs. I am on a Splunk on-premises clustered environment.

10-03-2023 23:48:50.697 +0000 ERROR SearchParser [110001 TcpChannelThread] - The search specifies a macro 'get_tenable_sourcetype' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

How do I get rid of this error in our internal logs? I have checked under Macros and all configurations and I don't see this macro. Inside TA-tenable/local/macros.conf I see only:

[get_tenable_index]
definition = (index=abc)
iseval = 0

Please help me with your thoughts. Thanks.
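A sketch for finding what still references the missing macro, since the error comes from whatever search calls get_tenable_sourcetype rather than from the macro itself. Run it on the search head that logs the error; the field names are the ones the saved/searches REST endpoint normally returns:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*get_tenable_sourcetype*"
| table title eai:acl.app eai:acl.owner search

Any saved search (or another macro or dashboard search) that expands to get_tenable_sourcetype will keep producing this error until it is edited or the macro is re-created in the add-on's macros.conf.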
Hello, I am seeing the below error in the internal logs:

The lookup table XYZ does not exist or is not available.

I have checked the lookup table files, lookup definitions, and automatic lookups but didn't find this lookup. How do I get rid of this error? Any suggestions, please.

Thanks
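A small sketch for locating where the error originates before chasing the lookup itself; this simply groups the internal error events by standard fields:

index=_internal sourcetype=splunkd log_level=ERROR "lookup table" "XYZ"
| stats count BY host component source

Knowing which host and component emits the message (a search head, an indexer applying a knowledge bundle, a particular app's scheduled search) usually points at the object that still references the removed lookup.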
Hello guys. This is my first post here, asking for help with extracting fields from a JSON object. Below is an example of a record:

{"pod":"fmd9p","time":"2023-10-03T21:49:39.31255352Z", "source":"/var/log/containers/fmd9p_default.log","container_id":"1ae53e1be","log": "I1003 14:49:39.312453 test_main.cc:149] trace_id=\"8aeb0\" event=\"Worker.Finish\" program_run_sec=25.1377 status=\"OK\""}

How can I extract trace_id, event, program_run_sec, and status from the log section automatically by setting up a sourcetype? Is it doable? Thanks for any help and advice.
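It should be doable. A hedged sketch of the search-time side, assuming the outer JSON is extracted (for example via KV_MODE = json on the sourcetype, or spath in the search), after which the inner log field contains plain key="value" pairs; the index and sourcetype names are placeholders:

index=k8s_logs sourcetype=my_container_logs
| spath
| rex field=log "trace_id=\"(?<trace_id>[^\"]+)\""
| rex field=log "event=\"(?<event>[^\"]+)\""
| rex field=log "program_run_sec=(?<program_run_sec>[\d.]+)"
| rex field=log "status=\"(?<status>[^\"]+)\""
| table trace_id event program_run_sec status

The same patterns can be moved into props/transforms (a REPORT transform with SOURCE_KEY = log) to make the extraction automatic for the sourcetype.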
I have a bunch of alerts. I received the email alert, but the automatic incident was not created in ServiceNow. How do I troubleshoot this issue?
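A starting-point sketch: modular alert actions log their execution to _internal under the sendmodalert component, so checking there shows whether the ServiceNow action ran and what it returned. The action name snow_incident is an assumption based on the Splunk Add-on for ServiceNow; substitute whatever action the alert actually uses:

index=_internal sourcetype=splunkd component=sendmodalert action="snow_incident"
| table _time log_level _raw

If nothing is logged at all, the alert action is not configured or not firing; if errors are logged, they usually name the credential, permission, or API problem on the ServiceNow side.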
Hello, I'm working with a Splunk cluster which has two slave peers and I need to disable an index on the Cluster Master using the REST API. I've tried the usual endpoint (/servicesNS/nobody/{app}/configs/conf-indexes/{index}) as this doc says (https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTconf#configs.2Fconf-.7Bfile.7D.2F.7Bs... ), but it doesn't seem to work on the Cluster Master. Can someone please provide me with the specific REST API endpoint I should use to disable an index on the Cluster Master? I have read the documentation https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTcluster but there is no reference to what I need. Thank you in advance for your assistance
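For reference, a sketch of the usual route for this on a cluster: peer index settings are normally managed through the manager's configuration bundle rather than the conf REST endpoints, so the change goes into the bundle copy of indexes.conf and is then pushed. The app and index names below are placeholders:

# On the cluster master, e.g. $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
# (manager-apps on newer versions)
[my_index]
disabled = true

# then validate and push the bundle to the peers
$SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes

Whether "disabled" on a clustered index fits the goal (versus removing the stanza or freezing the data) is worth checking against the managed-index documentation for your version.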
I am trying to host Prometheus metrics in a Splunk app such that the metrics are available at the `.../my_app/v1/metrics` endpoint. I am able to create a handler of type PersistentServerConnectionApplication and have it return Prometheus metrics. The response, however, comes back with status code `500` and content `Unexpected character while looking for value: '#'`. Prometheus metrics do not conform to any of the supported `output_modes` (atom | csv | json | json_cols | json_rows | raw | xml), so I get the same error irrespective of the output mode chosen. Is there a way to bypass the output check? Is there any other alternative for hosting output in a non-conforming format via a Splunk REST API?
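For context, a bare sketch of the persistent-handler reply shape, in case the 500 comes from the handler's return value being parsed as JSON rather than from the output_modes list itself; that failure mode is an assumption, and only the class/return convention shown is the standard PersistentServerConnectionApplication pattern:

import json
from splunk.persistconn.application import PersistentServerConnectionApplication

class MetricsHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        super(MetricsHandler, self).__init__()

    def handle(self, in_string):
        # Prometheus exposition text; lines starting with '#' are comments/metadata.
        metrics_text = "# HELP app_up 1 if the app is up\napp_up 1\n"
        # The reply to splunkd is itself JSON; the metrics go in as the payload
        # string so splunkd is not asked to parse the '#' lines as a value.
        return json.dumps({
            "payload": metrics_text,
            "status": 200,
        })

If the raw text still gets mangled by the output-mode handling, exposing the metrics outside splunkd (for example via a small sidecar HTTP server fed by a scripted input) may be the more practical alternative.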
Welcome to the Community Member Spotlight series. In this edition, Radhika Bhatia, a Site Reliability Engineer, shares her perspective, tips, use cases, and more…

— Ryan Paredez & Claudia Landivar, Community Managers

In this post: Your work · Working with AppD products · Keeping up with the evolving technical landscape · Life after-hours · Parting insights

Your work

Radhika Bhatia, Site Reliability Engineer

Could you give us a picture of 'a day in the life'?
My workday revolves around client engagement, internal collaboration, solution design, and research. I start by talking to clients and stakeholders to understand their critical requirements, including custom dashboards and proactive reporting needs. These discussions inform our team's strategy, and we work together to design solutions, often initiating proof of concepts for new clients, followed by implementations. We also establish standardized monitoring practices while tailoring plans for each client's specific use cases.

How did you get involved with this work?
I had the fortunate opportunity to kickstart my career with the service design team, where I worked towards establishing the fundamental observability infrastructure for several prominent financial institutions worldwide. Although my professional journey so far has been relatively short, I deeply value the knowledge and skills I've gained during these years.

What has fed your interest in your field?
My motivation came from observing my senior colleagues making critical decisions that directly and instantly impacted our clients' experiences. Witnessing the positive outcomes of resolving persistent issues and enhancing visibility within applications has consistently inspired me to strive for continuous improvement in my role.

Working with AppD Products

What first brought you to the AppD community?
When I first joined the company, we were in the process of exploring AppDynamics primarily for proactive monitoring and enhancing visibility for our clients and stakeholders through dashboards and reports. The AppDynamics Community quickly became our primary resource for resolving any challenges or questions that arose, and it continues to be a valuable source of support today.

Can you tell us about a positive experience you've had with the community?
In a specific instance, my team encountered a complex issue that had proven difficult to resolve through internal efforts. Despite multiple attempts, it was the AppDynamics community that ultimately provided a straightforward yet effective solution. That solution not only resolved the immediate issue but also became a valuable best practice that we continue to use to this day.

How do you use AppD in your role?
AppDynamics is widely used in my organization, and I am responsible for things like:
- Instrumenting AppDynamics for all our client environments
- Designing dashboards for stakeholders
- Ensuring proactive monitoring using health rules, alerts, and HTTP integrations
- Creating a standard monitoring and observability plan for all our clients
Every use case of AppDynamics is an interesting experience, as it offers a new level of visibility into the application to both our organization and the client's. We have been able to use AppDynamics to solve some of the longest-running issues. It has many times provided capabilities and views that we didn't know we really needed.

What are your top 2 AppDynamics hot tips?

Leverage the AppDynamics Community
When encountering challenges, don't hesitate to turn to the AppDynamics community—not just for browsing existing issues but also for raising new ones. The community is remarkably responsive, and it's a fantastic way to learn from others' experiences and discover novel approaches to using the tool effectively.

Take advantage of AppDynamics' database custom metric feature
This functionality is invaluable for capturing and alerting on metrics that might otherwise be challenging to obtain. One such example for us is blocking sessions. It has helped provide our DBAs with timely notifications about critical blocking sessions.

How do you keep up with the evolving technical environment?

What's your best way of keeping up with industry news?
For me, it's mostly the recommendation feed on Google Discover, along with some items on the InShorts app, which understands my line of interest very well. Additionally, I'm fortunate to have a network of friends, colleagues, and family members in the IT field, which often leads to casual conversations that provide valuable insights and updates.

What have you learned in the past year that you wish you had known when you started your career?
What I've realized in the past year is that the learning journey is ongoing. Every new skill acquired brings a deeper understanding of our chosen field and refines our work. It's essential to embrace this constant process, as there's always more to learn and contribute to our careers.

How—or where—do you find inspiration?
Inspiration primarily arises from immersing myself in the depths of my work. The satisfaction of tackling new challenges and delving into intricate problem-solving motivates me to explore further and dig deeper into the details.

What about life after-hours?
Outside of work, I have a deep appreciation for reading books, as they provide valuable insights into the lives and thoughts of remarkable individuals. Exploring different perspectives helps me gain a better understanding of both others and myself. In line with this interest, I thoroughly enjoy engaging in conversations with people, delving into their life experiences and seeking their opinions on various topics.

Parting insights

What advice would you give someone who is up and coming in your field?
Learn everything that interests you; it will somehow prove to be a beneficial skill to have. Pay particular attention to what draws you into the depths of understanding, as these passions ultimately shape your identity.
Good afternoon,

Background: I found a configuration issue in one of our firewalls which I'm trying to remediate, where an admin created a very broad access rule that has permitted traffic over a wide array of TCP/UDP ports. I started working to identify valid traffic which has used the rule, but a co-worker mentioned an easy win would be creating an ACL to block any ports which had not already been allowed through this very promiscuous rule.

My problem is that I know how to use the data model to identify TCP/UDP traffic which has been logged egressing through the rule, but how could I modify the search provided below so that I get a result displaying which ports have NOT been logged? (Also, bonus points if you can help me view the returned numbers as ranges rather than individual numbers, e.g. "5000-42000".)

Here is my current search:

| tstats values(All_Traffic.dest_port) AS dest_port values(All_Traffic.dest_ip) AS dest_ip dc(All_Traffic.dest_ip) AS num_dest_ip dc(All_Traffic.dest_port) AS num_dest_port FROM datamodel=Network_Traffic WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule" BY All_Traffic.dest_port
| rename All_Traffic.* AS *

Thank you in advance for any help that you may be able to provide!
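A sketch of one way to do both parts: generate the full port range, mark the ports the data model has seen for that rule, keep the unseen ones, and then collapse consecutive ports into ranges. It reuses the fw_rule filter from the search above and assumes ports 0-65535 are the universe of interest:

| makeresults
| eval dest_port=mvrange(0,65536)
| mvexpand dest_port
| eval hits=0
| append
    [| tstats count AS hits FROM datamodel=Network_Traffic WHERE index="firewall" AND sourcetype="traffic" AND fw_rule="horrible_rule" BY All_Traffic.dest_port
     | rename All_Traffic.dest_port AS dest_port
     | eval dest_port=tonumber(dest_port)]
| stats sum(hits) AS hits BY dest_port
| where hits==0
| sort 0 num(dest_port)
| autoregress dest_port AS prev_port
| eval gap=if(isnull(prev_port) OR dest_port - prev_port > 1, 1, 0)
| accum gap AS block
| stats min(dest_port) AS start max(dest_port) AS end BY block
| eval port_range=if(start=end, tostring(start), start."-".end)
| fields port_range

The autoregress/accum pair just numbers each run of consecutive unseen ports so they can be summarized as start-end ranges.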
Hello! I'm trying to figure out a way to display a single value that calculates user disconnects divided by the time range from the time picker. The original number comes from the total disconnects averaged over the distinct user count. I need to divide that number by the number of days selected in the time picker. The goal is the average user disconnects per day for the time frame selected in the picker. For example, 100 disconnects across 10 distinct users = 10; divided by the number of days selected in the picker (7), that should equal about 1.42 disconnects per day. I hope that makes sense. Here is my search:

index=... host=HostName earliest=$time_tok.earliest$ latest=$time_tok.latest$
| stats count by "User ID"
| search "User ID"=*
| stats avg(count)

That only gives me the total disconnects divided by distinct users, but I need that number divided by the number of days from the time picker, and I can't get it to work. Thank you!!!
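A sketch using addinfo, which attaches the search's actual earliest/latest epoch times so the day count follows whatever the time picker holds; the rounding and output field name are just illustrative:

index=... host=HostName earliest=$time_tok.earliest$ latest=$time_tok.latest$
| stats count BY "User ID"
| stats avg(count) AS avg_per_user
| addinfo
| eval days=(info_max_time - info_min_time) / 86400
| eval disconnects_per_day=round(avg_per_user / days, 2)
| fields disconnects_per_day

With the 100-disconnect / 10-user / 7-day example this gives 10 / 7 ≈ 1.43 disconnects per day.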