All Posts

The REPORT setting is incorrect.  The "REPORT-" keyword identifies this as a report setting and so must be on the left side ("Name").  The name of the transform stanza goes on the right side ("Value").
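For illustration, here is a minimal sketch of that pairing; the sourcetype, stanza, and field names below are hypothetical:

props.conf

[my_sourcetype]
REPORT-extract_user = my_transform_stanza

transforms.conf

[my_transform_stanza]
REGEX = user=(?<user>\S+)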
Most of the work of a query is done by the indexers, so they need to know as much about the search as possible. That is what the knowledge bundle is for. To know what results to return to the SH, the peers need to know the values of the tags, eventtypes, and macros used in the query. They also need to know which fields to extract and how to extract them. It's all part of the map/reduce process, where the search activity is divided among many peers to make the query faster.

Information sent in the bundle does not modify the settings on the indexer. The bundle supplements the information the peer read from its .conf files. That supplementary data is not visible to either btool or splunk show.
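As a side note (a sketch only, assuming a default installation path): on a search peer you can usually confirm that bundles are arriving by listing the searchpeers directory, even though btool won't show their contents:

ls -lh $SPLUNK_HOME/var/run/searchpeers/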
OP is apparently looking for someone to produce output usable as the answer to a question in some course quiz. (Hence Q1.)
For our final-year project we have been assigned the task of implementing a DDoS attack and detecting it with Splunk. Our issue is that we are not getting any logs from Splunk's Add Data input option for Local Windows Network Monitoring, which seemed to work in the video I was following.

Context of the DDoS: we are using an hping3 TCP SYN flood attack, but its traffic isn't coming in through my newly added data input source. All the other network logs are being generated, like the traffic from my GCP RDP session to the server and back, but those are the only types of logs showing up.

If I were to guess at the problem: GCP provides us with two IPs, an internal and an external IP. I've attacked both, but there is no difference in the incoming logs. I've checked the connectivity between the two GCP VMs (Windows and Ubuntu) using ping and telnet, turned off the Windows firewall on the RDP host, and added a firewall rule that allows ingress TCP packets on ports 80 and 21 (which we are attacking). So my guess is ultimately that GCP itself is blocking these types of packets. I'm still not sure how all these things work (I'm an AI dev, you see; this is not my field).

So please help me if you can and have the time! Thank you for reading my question and taking the time to do it. If you have any other questions you need answered in order to help me, feel free to ask away as much as you want.
Table 1 has single values in its columns for each service, while Table 2 has multiple rows per service. You could duplicate the rows of Table 1 to fill the rows of Table 2, or you could turn the fields of Table 2 into multi-value fields in Table 1. E.g., to do the latter (multi-value field) option:

<query 1>
| append [ <query2> ]
| stats values(*) as * by service
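To make the multi-value behaviour concrete, here is a tiny self-contained sketch you can run as-is; the service/owner/host fields and values are made up purely for illustration:

| makeresults
| eval service="svc1", owner="alice"
| append
    [| makeresults count=2
    | streamstats count as n
    | eval service="svc1", host="host".n
    | fields service host]
| stats values(*) as * by service

The result is one row for svc1, with owner as a single value and host as a multi-value field containing host1 and host2.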
Hi @mariamms,
here you can find all the information you need about HEC; at first:
https://www.splunk.com/en_us/pdfs/tech-brief/splunk-validated-architectures.pdf (page 28)
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/ShareHECData
https://www.youtube.com/watch?v=qROXrFGqWAU

In a few words, you have to create an HEC receiver by creating a token that must be passed to the sender. You can also have an intermediate load balancer for HA features; in this case, you must have the same token on all the receivers.

About the Console, what do you mean? HEC doesn't have any console. If you're speaking of the cluster consoles, you have to search for the Cluster Master (for an Indexer Cluster) and the SH Deployer (for a Search Head Cluster). You can find information at https://docs.splunk.com/Documentation/Splunk/9.2.0/Indexer/Aboutclusters and https://docs.splunk.com/Documentation/Splunk/9.2.0/DistSearch/AboutSHC

Finally, there's also a Monitoring Console, and you can find information at https://docs.splunk.com/Documentation/Splunk/9.2.0/DMC/DMCoverview

Ciao.
Giuseppe
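As a quick illustration of what the sender side looks like (a sketch only; the hostname, port, and token below are placeholders, and your endpoint URL may differ, especially on Splunk Cloud):

curl -k "https://splunk.example.com:8088/services/collector/event" \
    -H "Authorization: Splunk <your-hec-token>" \
    -d '{"event": "hello from HEC", "sourcetype": "manual"}'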
Hello,

I have a CSV file in Azure, and I've created an input in the "Splunk Add-on for Microsoft Cloudservices Storage Blob" app.

Also, I've created this in the sourcetype:

this is the transforms:

and this is the field extraction:

but the logs do not get parsed; they are indexed as one line. Am I missing something?
Ideas votes are a precious resource. As @isoutamo suggested, try bugging the PM periodically through Slack. Hopefully, they'll pick it up without the minimum number of required votes.
Hi @jasuchung,

You can take advantage of Simple XML's automatic positioning of multiple visualizations within a panel for automatic alignment and adjust sizes as needed:

<row>
  <panel>
    <table/>
  </panel>
  <panel>
    <table/>
    <table/>
  </panel>
</row>

<dashboard version="1.1" theme="light">
  <label>panel_layout</label>
  <row depends="$alwaysHideCSS$">
    <html>
      <style>
        #table_ref_base { height: 800px !important }
        #table_ref_red { height: 400px !important }
        #table_ref_org { height: 400px !important }
      </style>
    </html>
  </row>
  <row>
    <panel>
      <table id="table_ref_base">
        <title>table_ref_base</title>
        <search>
          <query>| makeresults</query>
          <earliest>0</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel>
      <table id="table_ref_red">
        <title>table_ref_red</title>
        <search>
          <query>| makeresults count=15</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
      <table id="table_ref_org">
        <title>table_ref_org</title>
        <search>
          <query>| makeresults</query>
          <earliest>0</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</dashboard>

Whitespace may still be visible depending on the number of results in the table; however, the tables on the right should be divided evenly within the available vertical space.
Small correction: NO_BINARY_CHECK = true (If I edit my original answer, the formatting will be mangled.)
My previous answer used a single source example, but you can modify unarchive_cmd settings per-source as needed.
Hi @abi2023,

If you need to mask data pre-transit for whatever reason and the force_local_processing setting doesn't meet your requirements, you can use the unarchive_cmd props.conf setting to stream inputs through Perl, sed, or any command or script that reads input from stdin and writes output to stdout.

For example, to mask strings that might be IPv4 addresses in a log file using Perl:

/tmp/foo.log

This is 1.2.3.4.
Uh oh, 5.6.7.8 here.
Definitely not an IP address: a.1.b.4.
512.0.1.2 isn't an IP address. Oops.

inputs.conf

[monitor:///tmp/foo.log]
sourcetype = foo

props.conf

[source::/tmp/foo.log]
unarchive_cmd = perl -pe 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/*.*.*.*/'
sourcetype = preprocess-foo
NO_BINARY_CHECK = true

[preprocess-foo]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false

[foo]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

The transmitted events will be masked before they're sent to the receiver:

Regular expressions that work with SEDCMD should work with Perl without modifications. The unarchive_cmd setting is a flexible alternative to scripted and modular inputs. The sources do not have to be archive files. As others have noted, you can deploy different props.conf configurations to different forwarders. Your props.conf settings for line breaks, timestamp extraction, etc., should be deployed to the next downstream instance of Splunk Enterprise (heavy forwarder or indexer) or Splunk Cloud.
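For reference, a sketch of the same mask expressed as a SEDCMD, which would typically run at parse time on the next full Splunk Enterprise instance (or locally only with force_local_processing) rather than through unarchive_cmd:

props.conf

[foo]
SEDCMD-mask_ipv4 = s/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/*.*.*.*/g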
Hello,

To check the version of all apps and add-ons installed on Splunk via the CLI:

./splunk list app && ./splunk list add-ons | sort

And to try this in the GUI, run this search query:

| rest /services/apps/local splunk_server=local
| table title version
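If it helps, a slight variation (a sketch; field availability can vary by version) also shows each app's human-readable label and whether it is disabled:

| rest /services/apps/local splunk_server=local
| table label title version disabled
| sort label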
query 1:

|mstats sum(transaction) as Total sum(success) as Success where index=metric-index transaction IN(transaction1, transaction2, transaction3) by service transaction
|eval SuccessPerct=round(((Success/Total)*100),2)
|xyseries service transaction Total Success SuccessPerct
|table service "Success: transaction1" "SuccessPerct: transaction1" "SuccessPerct: transaction2" "Total: transaction2" "Success: transaction2"
|join service
    [|mstats sum(error-count) as Error where index=metric-index by service errortype
    |append
        [|search index=app-index sourcetype=appl-logs (TERM(POST) OR TERM(GET) OR TERM(DELETE) OR TERM(PATCH)) OR errorNumber!=0 appls=et
        |lookup app-error.csv code as errorNumber output type as errortype
        |stats count as app.error count by appls errortype
        |rename appls as service error-count as Error]
    |xyseries service errortype Error
    |rename wvv as WVVErrors xxf as nonerrors]
|addtotals "Success: transaction1" WVVErrors nonerrors fieldname="Total: transaction1"
|eval sort_service=case(service="serv1",1,service="serv2",2,service="serv3",3,service="serv4",4,service="serv5",5,service="serv6",6,service="serv7",7,service="serv8",8,service="serv9",9,service="serv10",10)
|sort + sort_service
|table service "Success: transaction1" "SuccessPerct: transaction2" WVVErrors nonerrors
|fillnull value=0

query 1 output:

service  Success: transaction1  SuccessPerct: transaction2  WVVErrors      nonerrors
serv1    345678.000000          12.33                       7.000000       110.000000
serv2    345213.000000          22.34                       8777.000000    0
serv3    1269.000000            12.45                       7768.000000    563
serv4    34567.000000           11.56                       124447.000000  0
serv5    23456.000000           67.55                       10.000000      067
serv6    67778.000000           89.55                       15.000000      32
serv7    34421.000000           89.00                       17.000000      56
serv8    239078.000000          53.98                       37.000000      67.0000000
serv9    769.000000             09.54                       87.000000      8.00000
serv10   3467678.000000         87.99                       22.000000      27.000000
serv11   285678.000000          56.44                       1123.000000    90.00000
serv12   5123.000000            89.66                       34557.000000   34
serv13   678.000000             90.54                       37.000000      56
serv14   345234678.000000       89.22                       897.000000     33
serv15   12412.33678.000000     45.29                       11237.000000   23.000000

query 2:

|mstats sum(error-count) as Error where index=metric-index by service errorNumber errortype

query 2 output:

service  errorNumber  errortype  Error
serv1    0            wvv        7.000000
serv1    22           wvv        8777.000000
serv1    22           wvv        7768.000000
serv1    45           wvv        124447.000000
serv2    0            xxf        10.000000
serv2    22           xxf        15.000000
serv2    22           xxf        17.000000
serv2    45           xxf        37.000000
serv3    0            wvv        87.000000
serv3    22           wvv        22.000000
serv3    22           wvv        1123.000000
serv3    45           wvv        34557.000000
serv4    0            xxf        37.000000
serv4    26           xxf        897.000000
serv4    22           xxf        11237.000000
serv4    40           xxf        7768.000000
serv5    25           wvv        124447.000000
serv5    28           wvv        10.000000
serv5    1000         wvv        15.000000
serv5    10           wvv        17.000000
serv6    22           xxf        37.000000
serv6    34           xxf        87.000000
serv6    88           xxf        22.000000
serv6    10           xxf        45.000000

We want to combine query 1 and query 2 and get both outputs in one table.
Here on StackOverFlow I share a script to change log4j2.xml and proxy.py automatically. Besides the script, I include a tip on how to do this inside a Dockerfile. Basically, I replace the hardcoded values in log4j2.xml and proxy.py with environment variables, which lets you change them more easily at deploy time.

log4j2.xml with environment variable

proxy.py with environment variable

def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()
    pass
I came across a workaround that was suggested in an unofficial capacity by someone at AppDynamics during their local lab explorations. While this solution isn't officially supported by AppDynamics, it has proven to be effective for adjusting the log levels for both the Proxy and the Watchdog components within my AppDynamics setup. I'd like to share the steps involved, but please proceed with caution and understand that this is not a sanctioned solution.

I recommend changing only the log4j2.xml file, because the proxy messages appear to be responsible for almost 99% of the log messages.

Here's a summary of the steps:

Proxy Log Level: The log4j2.xml file controls this. You can find it within the appdynamics_bindeps module. For example, in my WSL setup, it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml. Modify the seven <AsyncLogger> log level items within the <Loggers> section to one of the following: debug, info, warn, error, or fatal.

Watchdog Log Level: This can be adjusted in the proxy.py file found within the appdynamics Python module. For example, in my WSL setup, it's located at /home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. In the Docker image python:3.9, the path is /usr/local/lib/python3.9/site-packages/appdynamics/scripts/pyagent/commands/proxy.py. You will need to hardcode the log level in the configure_proxy_logger and configure_watchdog_logger functions by changing the level variable.

My versions

$ pip freeze | grep appdynamics
appdynamics==24.2.0.6567
appdynamics-bindeps-linux-x64==24.2.0
appdynamics-proxysupport-linux-x64==11.68.3

log4j2.xml

<Loggers>
  <AsyncLogger name="com.singularity" level="info" additivity="false">
    <AppenderRef ref="Default"/>
    <AppenderRef ref="RESTAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.dynamicservice" level="info" additivity="false">
    <AppenderRef ref="DynamicServiceAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.ee.service.datapipeline" level="info" additivity="false">
    <AppenderRef ref="DataPipelineAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.datapipeline" level="info" additivity="false">
    <AppenderRef ref="DataPipelineAppender"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.BCTLogger" level="info" additivity="false">
    <AppenderRef ref="BCTAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.api" level="info" additivity="false">
    <AppenderRef ref="APIAppender"/>
  </AsyncLogger>
  <AsyncLogger name="com.singularity.segment.TxnTracer" level="info" additivity="false">
    <AppenderRef ref="Default"/>
    <AppenderRef ref="Console"/>
  </AsyncLogger>
  <Root level="error">
    <AppenderRef ref="Console"/>
    <AppenderRef ref="Default"/>
  </Root>
</Loggers>

proxy.py

def configure_proxy_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass

def configure_watchdog_logger(debug):
    logger = logging.getLogger('appdynamics.proxy')
    level = logging.DEBUG if debug else logging.INFO
    pass

Suggestion

Change the hardcoded values to environment variables:

log4j2.xml file: change from [level="info"] to
proxy.py file: change from [logging.DEBUG if debug else logging.INFO] to [os.getenv("APP_APPD_WATCHDOG_LOG_LEVEL", "info").upper()]

Warning

Please note, these paths and methods may vary based on your AppDynamics version and environment setup. Always back up files before making changes, and be aware that updates to AppDynamics may overwrite your customizations. I hope this helps!
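Since the exact site-packages paths above vary by environment, one way to locate where the appdynamics packages are installed (a sketch, assuming pip is available in the same Python environment) is:

$ pip show appdynamics-bindeps-linux-x64 | grep Location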
Explain Splunk Enterprise Event Collector, Processor and Console architecture.
Hi @yuanliu,

As suggested, I tried the query below, but I am not getting the expected output. I mean I am not getting the previous hour's data under the delta column; all values are 0. Please see my output.

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
|streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
|sort application _time
|eval delta = if(Trans_max == Trans, delta, "-" . delta)
|eval pct_delta = delta / Trans * 100
|fields - Trans_max

Output:

_time             application  Trans        delta_Trans  pct_delta_Trans
2022-01-22 02:00  app1         3456.000000  0.000000     0.000000
2022-01-22 02:00  app2         5632.000000  0.000000     0.000000
2022-01-22 02:00  app3         5643.000000  0.000000     0.000000
2022-01-22 02:00  app4         16543.00000  0.000000     0.000000
"><h1>h1</h1>
That's not the prettiest data. In order to make some sense of this data you have to first parse the outer array

| spath {}

Then you have to split the resulting multivalued field

| mvexpand {}

Now you can parse each of those structures separately

| spath input={}
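Putting those three steps together, here is a minimal runnable sketch; the JSON array in the eval and its name/value keys are made-up sample data standing in for the real events:

| makeresults
| eval _raw="[{\"name\":\"a\",\"value\":1},{\"name\":\"b\",\"value\":2}]"
| spath {}
| mvexpand {}
| spath input={}
| table name value

Each element of the outer array ends up as its own row, with name and value extracted as ordinary fields.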